1
Christian M, Oosthuizen WC, Bester MN, de Bruyn PJN. Robustly estimating the demographic contribution of immigration: Simulation, sensitivity analysis and seals. J Anim Ecol 2024; 93:632-645. PMID: 38297453. DOI: 10.1111/1365-2656.14053. Received 10/04/2023; accepted 01/11/2024.
Abstract
Identifying important demographic drivers of population dynamics is fundamental for understanding life-history evolution and implementing effective conservation measures. Integrated population models (IPMs) coupled with transient life table response experiments (tLTREs) allow ecologists to quantify the contributions of demographic parameters to observed population change. While IPMs can estimate parameters that are not estimable from any single data source alone, such as immigration, the estimated contribution of such parameters to population change is prone to bias, and it is currently unclear when robust conclusions can be drawn from them. We sought to understand the drivers of a rebounding southern elephant seal population on Marion Island using the IPM-tLTRE framework, applied to count and mark-recapture data on 9500 female seals over nearly 40 years. Given the uncertainty around IPM-tLTRE estimates of immigration, we also aimed to investigate the utility of simulation and sensitivity analyses as general tools for evaluating the robustness of conclusions obtained in this framework. Using a Bayesian IPM and tLTRE analysis, we quantified the contributions of survival, immigration and population structure to population growth. We assessed the sensitivity of our estimates to the choice of multivariate priors on immigration and other vital rates. To do so, we made a novel application of Gaussian process priors, in comparison with commonly used shrinkage priors. Using simulation, we assessed our model's ability to estimate the demographic contribution of immigration under different levels of temporal variance in immigration. The tLTRE analysis suggested that adult survival and immigration were the most important drivers of recent population growth. While the contribution of immigration was sensitive to prior choices, the estimate was consistently large. Furthermore, our simulation study validated the importance of immigration by showing that our estimate of its demographic contribution is unlikely to be a biased overestimate. Our results highlight the connectivity between distant populations of southern elephant seals, illustrating that female dispersal can be important in regulating the abundance of local populations even when natal site fidelity is high. More generally, we demonstrate how robust ecological conclusions about immigration may be obtained from the IPM-tLTRE framework by combining sensitivity analysis and simulation.
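The first-order tLTRE logic behind such contribution estimates can be illustrated with a toy numerical sketch (entirely synthetic rates, not the Marion Island estimates): in an additive model where realized growth is adult survival plus per-capita immigration, each rate's contribution to variation in growth is its sensitivity times its covariance with realized growth, and the contributions sum to the total variance in growth.

```python
import numpy as np

# Toy time series of vital rates (synthetic, not the paper's estimates):
# realized growth is modelled here as adult survival plus per-capita immigration.
rng = np.random.default_rng(1)
T = 40
phi = 0.85 + 0.03 * rng.standard_normal(T)    # adult survival
omega = 0.10 + 0.05 * rng.standard_normal(T)  # per-capita immigration rate
lam = phi + omega                             # realized population growth rate

# First-order transient LTRE: the contribution of each demographic rate to
# var(lambda) is its sensitivity times its covariance with lambda.
# In this additive toy model both sensitivities equal 1, so the contributions
# are simply the covariances, and they sum to the total variance in growth.
c_phi = np.cov(phi, lam)[0, 1]
c_omega = np.cov(omega, lam)[0, 1]
```

In this sketch the more variable rate (immigration) contributes more to variance in realized growth, which is the kind of ranking the tLTRE analysis produces for the real data.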
Affiliation(s)
- Murray Christian
- Department of Zoology and Entomology, Mammal Research Institute, University of Pretoria, Hatfield, South Africa
- Centre for Statistics in Ecology, the Environment and Conservation, Department of Statistical Sciences, University of Cape Town, Cape Town, South Africa
- W Chris Oosthuizen
- Centre for Statistics in Ecology, the Environment and Conservation, Department of Statistical Sciences, University of Cape Town, Cape Town, South Africa
- Marthán N Bester
- Department of Zoology and Entomology, Mammal Research Institute, University of Pretoria, Hatfield, South Africa
- P J Nico de Bruyn
- Department of Zoology and Entomology, Mammal Research Institute, University of Pretoria, Hatfield, South Africa
2
Zhang R, Chen L, Oliver LD, Voineskos AN, Park JY. SAN: Mitigating spatial covariance heterogeneity in cortical thickness data collected from multiple scanners or sites. Hum Brain Mapp 2024; 45:e26692. PMID: 38712767. PMCID: PMC11075170. DOI: 10.1002/hbm.26692. Received 12/13/2023; revised 03/27/2024; accepted 04/08/2024. Open access.
Abstract
In neuroimaging studies, combining data collected from multiple study sites or scanners is becoming common practice to increase the reproducibility of scientific discoveries. At the same time, unwanted variation arises from the use of different scanners (inter-scanner biases), which needs to be corrected before downstream analyses to facilitate replicable research and prevent spurious findings. While statistical harmonization methods such as ComBat have become popular for mitigating inter-scanner biases in neuroimaging, recent methodological advances have shown that harmonizing heterogeneous covariances results in higher data quality. In vertex-level cortical thickness data, heterogeneity in spatial autocorrelation is a critical factor affecting covariance heterogeneity. We propose a new statistical harmonization method called spatial autocorrelation normalization (SAN) that yields homogeneous covariance of vertex-level cortical thickness data across different scanners. We use an explicit Gaussian process to characterize scanner-invariant and scanner-specific variations and to reconstruct spatially homogeneous data across scanners. SAN is computationally feasible, and it easily allows the integration of existing harmonization methods. We demonstrate the utility of the proposed method using cortical thickness data from the Social Processes Initiative in the Neurobiology of the Schizophrenia(s) (SPINS) study. SAN is publicly available as an R package.
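SAN itself decomposes the signal into scanner-invariant and scanner-specific components with explicit Gaussian processes; as a rough, hypothetical illustration of why covariance (not only mean and variance) must be harmonized, the toy sketch below simulates two "scanners" with different spatial autocorrelation length-scales, whitens each with its own empirical spatial covariance, and recolors with a pooled covariance so both sites share one spatial structure. This is a simple whiten-and-recolor analogue, not the SAN algorithm.

```python
import numpy as np

def msqrt(C, inv=False):
    """Symmetric matrix square root (or inverse square root) via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    s = np.clip(w, 1e-10, None) ** (-0.5 if inv else 0.5)
    return (V * s) @ V.T

rng = np.random.default_rng(0)
p = 5  # "vertices" along a cortical strip
d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
# Two scanners with different spatial autocorrelation length-scales
X1 = rng.multivariate_normal(np.zeros(p), np.exp(-d / 1.0), size=4000)
X2 = rng.multivariate_normal(np.zeros(p), np.exp(-d / 4.0), size=4000)

# Whiten each scanner with its own empirical covariance, then recolor with a
# pooled covariance: after the transform, both sites share one spatial structure.
Cpool = 0.5 * (np.cov(X1.T) + np.cov(X2.T))
R = msqrt(Cpool)
H1 = X1 @ msqrt(np.cov(X1.T), inv=True) @ R
H2 = X2 @ msqrt(np.cov(X2.T), inv=True) @ R
```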
Affiliation(s)
- Rongqian Zhang
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
- Linxi Chen
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
- Aristotle N. Voineskos
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Jun Young Park
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
3
Csizi KS, Reiher M. Automated preparation of nanoscopic structures: Graph-based sequence analysis, mismatch detection, and pH-consistent protonation with uncertainty estimates. J Comput Chem 2024; 45:761-776. PMID: 38124290. DOI: 10.1002/jcc.27276. Received 11/11/2023; accepted 11/14/2023.
Abstract
Structure and function in nanoscale atomistic assemblies are tightly coupled: every atom, with its specific position, and even every electron has a decisive effect on the electronic structure and, hence, on the molecular properties. Molecular simulations of nanoscopic atomistic structures therefore require accurately resolved three-dimensional input structures. If extracted from experiment, these structures often suffer from severe uncertainties, of which the lack of information on hydrogen atoms is a prominent example. Hence, experimental structures require careful review and curation, which is a time-consuming and error-prone process. Here, we present a fast and robust protocol for automated structure analysis and pH-consistent protonation, in short, ASAP. For biomolecular targets, the ASAP protocol integrates sequence analysis and error assessment of a given input structure. ASAP allows for pKa prediction from reference data through Gaussian process regression, including uncertainty estimation, and connects to the system-focused atomistic modeling described in Brunken and Reiher (J. Chem. Theory Comput. 16, 2020, 1646). Although focused on biomolecules, ASAP can be extended to other nanoscopic objects, because most of its design elements rely on a general graph-based foundation guaranteeing transferability. The modular character of the underlying pipeline supports different degrees of automation, which allows for (i) efficient feedback loops for human-machine interaction with a low entrance barrier and (ii) integration into autonomous procedures such as automated force-field parametrizations. This facilitates fast switching of the pH state through on-the-fly system-focused reparametrization during a molecular simulation at virtually no extra computational cost.
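The pKa-regression component can be sketched with a minimal Gaussian process regressor that returns predictive uncertainty alongside the mean; the 1-D descriptor and reference pKa values below are hypothetical stand-ins for ASAP's actual features and training data.

```python
import numpy as np

def gp_predict(Xtr, ytr, Xte, ell=1.0, sf2=4.0, sn2=1e-4):
    """GP regression with an RBF kernel; returns predictive mean and std."""
    k = lambda A, B: sf2 * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)
    L = np.linalg.cholesky(k(Xtr, Xtr) + sn2 * np.eye(len(Xtr)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    V = np.linalg.solve(L, k(Xtr, Xte))
    mu = k(Xte, Xtr) @ alpha
    var = sf2 - np.sum(V**2, axis=0)   # k(x*, x*) = sf2 for the RBF kernel
    return mu, np.sqrt(np.clip(var, 0.0, None))

# Hypothetical 1-D descriptor values and reference pKa's for titratable sites
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([3.8, 4.5, 6.9, 7.2, 10.1])
mu, sd = gp_predict(X, y, np.array([1.5, 10.0]))
# sd is small near the reference data (x = 1.5) and large far from it (x = 10.0),
# which is exactly the uncertainty signal a protonation protocol can act on.
```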
Affiliation(s)
- Katja-Sophia Csizi
- Department of Chemistry and Applied Biosciences, ETH Zurich, Zurich, Switzerland
- Markus Reiher
- Department of Chemistry and Applied Biosciences, ETH Zurich, Zurich, Switzerland
4
Van Brandt S, Kapusuz KY, Sennesael J, Lemey S, Van Torre P, Verhaevert J, Van Hecke T, Rogier H. Reliability Analysis and Optimization of a Reconfigurable Matching Network for Communication and Sensing Antennas in Dynamic Environments through Gaussian Process Regression. Sensors (Basel) 2024; 24:2689. PMID: 38732793. PMCID: PMC11085883. DOI: 10.3390/s24092689. Received 03/28/2024; revised 04/13/2024; accepted 04/22/2024.
Abstract
During the implementation of the Internet of Things (IoT), the performance of communication and sensing antennas embedded in smart surfaces or smart devices can be degraded by objects in their reactive near field, which cause detuning and antenna mismatch. Matching networks have been proposed to re-establish impedance matching when antennas become detuned by environmental factors. In this work, the change in the reflection coefficient at the antenna due to the presence of objects is first characterized as a function of frequency and object distance by applying Gaussian process regression to experimental data. Based on this characterization, it is shown through simulation that, for random object positions, a dynamic environment can lower the reliability of a matching network by up to 90%, depending on the type of object, the probability distribution of the object distance, and the required bandwidth. As an alternative to complex and power-consuming real-time adaptive matching, a new, resilient network tuning strategy is proposed that takes these random variations into account. This new approach increases the reliability of the system by 10% to 40% in these dynamic environment scenarios.
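A minimal sketch of the characterization-plus-reliability idea, using a made-up detuning curve rather than the paper's measurements: fit a GP to noisy |S11| "measurements" as a function of object distance, then estimate reliability by Monte Carlo sampling of the random distance.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "measurements": antenna reflection coefficient |S11| (dB) versus the
# distance (cm) of a detuning object; mismatch is worst when the object is close.
d_tr = np.linspace(0.5, 10.0, 12)
s11_tr = -15.0 + 12.0 * np.exp(-d_tr / 2.0) + 0.3 * rng.standard_normal(12)

def gp_mean(xtr, ytr, xte, ell=2.0, sn2=0.09):
    """Posterior mean of a GP regression with an RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    return k(xte, xtr) @ np.linalg.solve(k(xtr, xtr) + sn2 * np.eye(len(xtr)), ytr)

# Monte Carlo reliability: probability the antenna stays matched (|S11| < -10 dB)
# when the object distance is random, here uniform over the measured range.
d_mc = rng.uniform(0.5, 10.0, 20000)
reliability = np.mean(gp_mean(d_tr, s11_tr, d_mc) < -10.0)
```

In the paper the regression is over frequency and distance jointly, and the reliability target is a matched band rather than a single threshold; this 1-D version only illustrates the mechanics.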
Affiliation(s)
- Seppe Van Brandt
- Internet Technology and Data Science Lab, Department of Information Technology, Faculty of Engineering and Architecture, Ghent University and Imec, 9052 Gent, Belgium
5
Long M, Song J, Rong Z, Mi L, Song Y, Hou Y. Adaptively leverage multiple real-world data sources for treatment effect estimation based on similarity. J Biopharm Stat 2024:1-11. PMID: 38557411. DOI: 10.1080/10543406.2024.2330202.
Abstract
The incorporation of real-world data (RWD) into medical product development and evaluation has grown consistently. However, there is no universally adopted method for deciding how much information to borrow from external data. This paper proposes a study design methodology called Tree-based Monte Carlo (TMC) that dynamically integrates patients from various RWD sources to estimate the treatment effect based on the similarity between the clinical trial and the RWD. Initially, a propensity score is developed to gauge the resemblance between the clinical trial data and each real-world dataset. Utilizing this similarity metric, we construct a hierarchical clustering tree that delineates varying degrees of similarity between each RWD source and the clinical trial data. Finally, a Gaussian process methodology is employed across this hierarchical clustering framework to synthesize the projected treatment effects of the external group. Simulation results show that our clustering tree successfully identifies similarity: data sources exhibiting greater similarity with the clinical trial are accorded higher weights in the treatment-estimation process, while less congruent sources receive comparatively lower emphasis. Compared with another Bayesian method, the meta-analytic predictive (MAP) prior, our proposed method's estimator is closer to the true value and has smaller bias.
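The borrowing-by-similarity idea can be caricatured in a few lines. This sketch uses a simple standardized-mean-difference metric on a single covariate and exponential weights, not TMC's propensity-score clustering tree or Gaussian process synthesis; the registry names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
trial = rng.normal(0.0, 1.0, 500)                 # a trial covariate
rwd = {"registry_A": rng.normal(0.1, 1.0, 500),   # most similar to the trial
       "registry_B": rng.normal(1.0, 1.0, 500),
       "registry_C": rng.normal(2.5, 1.0, 500)}   # least similar

# Dissimilarity via the absolute mean difference of the covariate;
# the borrowing weight decays exponentially with dissimilarity.
smd = {k: abs(x.mean() - trial.mean()) for k, x in rwd.items()}
w = np.array([np.exp(-smd[k]) for k in rwd])
weights = dict(zip(rwd, w / w.sum()))
```

As in the simulation study described above, the more similar a source is to the trial, the larger its weight in any downstream pooled estimate.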
Affiliation(s)
- Meihua Long
- Department of Biostatistics, Peking University, Beijing, China
- Jiali Song
- Department of Biostatistics, Peking University, Beijing, China
- Zhiwei Rong
- Department of Biostatistics, Peking University, Beijing, China
- Lan Mi
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Lymphoma, Peking University Cancer Hospital & Institute, Beijing, China
- Yuqin Song
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Lymphoma, Peking University Cancer Hospital & Institute, Beijing, China
- Yan Hou
- Department of Biostatistics, Peking University, Beijing, China
- Department of Lymphoma, Peking University Cancer Hospital & Institute, Beijing, China
- Peking University Clinical Research Center, Peking University, Beijing, China
6
Cremaschi A, De Iorio M, Kothandaraman N, Yap F, Tint MT, Eriksson J. Joint modeling of association networks and longitudinal biomarkers: An application to childhood obesity. Stat Med 2024; 43:1135-1152. PMID: 38197220. DOI: 10.1002/sim.9994. Received 02/14/2022; revised 11/30/2023; accepted 12/02/2023.
Abstract
The prevalence of chronic non-communicable diseases such as obesity has noticeably increased in the last decade. The study of these diseases in early life is of paramount importance in determining their course in adult life and in supporting clinical interventions. Recently, attention has been drawn to approaches that study the alteration of metabolic pathways in obese children. In this work, we propose a novel joint modeling approach for the analysis of growth biomarkers and metabolite associations, to unveil metabolic pathways related to childhood obesity. Within a Bayesian framework, we flexibly model the temporal evolution of growth trajectories and metabolic associations through the specification of a joint nonparametric random effect distribution, with the main goal of clustering subjects and thus identifying risk sub-groups. Growth profiles as well as patterns of metabolic associations determine the clustering structure. Inclusion of risk factors is straightforward through the specification of a regression term. We demonstrate the proposed approach on data from the Growing Up in Singapore Towards healthy Outcomes cohort study. Posterior inference is obtained via a tailored MCMC algorithm involving a nonparametric prior with mixed support. Our analysis identified potential key pathways in obese children that allow for the exploration of possible molecular mechanisms associated with childhood obesity.
Affiliation(s)
- Maria De Iorio
- Singapore Institute for Clinical Sciences, A*STAR, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Statistical Science, University College London, London, UK
- Fabian Yap
- Department of Paediatrics, KK Women's and Children's Hospital, Singapore
- Mya Thway Tint
- Singapore Institute for Clinical Sciences, A*STAR, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Johan Eriksson
- Singapore Institute for Clinical Sciences, A*STAR, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
7
Fraehr N, Wang QJ, Wu W, Nathan R. Assessment of surrogate models for flood inundation: The physics-guided LSG model vs. state-of-the-art machine learning models. Water Res 2024; 252:121202. PMID: 38290237. DOI: 10.1016/j.watres.2024.121202. Received 10/16/2023; revised 01/21/2024; accepted 01/23/2024.
Abstract
Hydrodynamic models can accurately simulate flood inundation but are limited by their high computational demand, which scales non-linearly with model complexity, resolution, and domain size. Therefore, it is often not feasible to use high-resolution hydrodynamic models for real-time flood predictions or when a large number of predictions are needed for probabilistic flood design. Computationally efficient surrogate models have been developed to address this issue. The recently developed Low-fidelity, Spatial analysis, and Gaussian Process Learning (LSG) model has shown strong performance in both computational efficiency and simulation accuracy. The LSG model is a physics-guided surrogate model that simulates flood inundation by first using an extremely coarse and simplified (i.e. low-fidelity) hydrodynamic model to provide an initial estimate of flood inundation. Then, the low-fidelity estimate is upskilled via Empirical Orthogonal Function (EOF) analysis and Sparse Gaussian Process models to provide accurate high-resolution predictions. Despite the promising results achieved thus far, the LSG model has not been benchmarked against other surrogate models. Such a comparison is needed to fully understand the value of the LSG model and to provide guidance for future research efforts in flood inundation simulation. This study compares the LSG model to four state-of-the-art surrogate flood inundation models. The surrogate models are assessed for their ability to simulate the temporal and spatial evolution of flood inundation for events both within and beyond the range used for model training. The models are evaluated for three distinct case studies in Australia and the United Kingdom. The LSG model is found to be superior in accuracy for both flood extent and water depth, including when applied to flood events outside the range of the training data, while achieving high computational efficiency. In addition, the low-fidelity model is found to play a crucial role in achieving the overall superior performance of the LSG model.
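The upskilling step of an LSG-style pipeline can be sketched on synthetic data: EOF (principal component) analysis compresses a set of high-fidelity flood maps, and a GP maps a scalar low-fidelity signal to the leading EOF coefficients, from which a high-resolution map is reconstructed. Everything below is synthetic and greatly simplified relative to the actual LSG model (which uses sparse GPs and real hydrodynamic output).

```python
import numpy as np

rng = np.random.default_rng(5)
n_events, n_cells = 60, 200

# Synthetic "high-fidelity" flood-depth maps driven by a scalar event magnitude,
# plus a cheap, noisy "low-fidelity" estimate of that magnitude.
mag = rng.uniform(0.0, 1.0, n_events)
basis = rng.standard_normal((2, n_cells))
high = np.outer(mag, basis[0]) + np.outer(mag**2, basis[1])
low = mag + 0.03 * rng.standard_normal(n_events)

# EOF (principal component) analysis of the high-fidelity maps
mean_map = high.mean(axis=0)
U, S, Vt = np.linalg.svd(high - mean_map, full_matrices=False)
pcs = (U * S)[:, :2]                 # leading EOF coefficients per event

def gp_mean(xtr, ytr, xte, ell=0.3, sn2=0.1):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    return k(xte, xtr) @ np.linalg.solve(k(xtr, xtr) + sn2 * np.eye(len(xtr)), ytr)

# "Upskill" a new low-fidelity value: predict each EOF coefficient with a GP,
# then reconstruct the high-resolution map from the EOF basis.
new_low = np.array([0.5])
coef = np.array([gp_mean(low, pcs[:, j], new_low)[0] for j in range(2)])
pred_map = mean_map + coef @ Vt[:2]
```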
Affiliation(s)
- Niels Fraehr
- Department of Infrastructure Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Victoria 3010, Australia
- Quan J Wang
- Department of Infrastructure Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Victoria 3010, Australia
- Wenyan Wu
- Department of Infrastructure Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Victoria 3010, Australia
- Rory Nathan
- Department of Infrastructure Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Victoria 3010, Australia
8
Xing H, Yau C. Bayesian inference for identifying tumour-specific cancer dependencies through integration of ex-vivo drug response assays and drug-protein profiling. BMC Bioinformatics 2024; 25:104. PMID: 38459430. PMCID: PMC10921766. DOI: 10.1186/s12859-024-05682-0. Received 11/23/2023; accepted 01/29/2024. Open access.
Abstract
The identification of tumor-specific molecular dependencies is essential for the development of effective cancer therapies. Genetic and chemical perturbations are powerful tools for discovering these dependencies. Even though chemical perturbations can be applied to primary cancer samples at large scale, the interpretation of experimental outcomes is often complicated by the fact that one chemical compound can affect multiple proteins. To overcome this challenge, Batzilla et al. (PLoS Comput Biol 18(8): e1010438, 2022) proposed DepInfeR, a regularized multi-response regression model designed to identify and estimate specific molecular dependencies of individual cancers from their ex-vivo drug sensitivity profiles. Inspired by their work, we propose a Bayesian extension of DepInfeR. Our approach offers several advantages over DepInfeR, including the ability to handle missing values in both protein-drug affinity and drug sensitivity profiles without the need for data pre-processing steps such as imputation. Moreover, our approach uses Gaussian processes to capture more complex molecular dependency structures, and provides probabilistic statements about whether a protein in the protein-drug affinity profiles is informative for the drug sensitivity profiles. Simulation studies demonstrate that our proposed approach achieves better prediction accuracy and is able to discover unreported dependency structures.
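One advantage claimed above, handling missing values without imputation, follows naturally from GP regression, which simply conditions on whichever entries are observed. A hypothetical 1-D illustration (a smooth stand-in for a sensitivity readout, not real assay data):

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(0.0, 5.0, 25)               # e.g. an ordered drug/condition index
y = np.sin(x)                               # hypothetical sensitivity readout
y[rng.choice(25, size=8, replace=False)] = np.nan   # missing assay values

def gp_mean(xtr, ytr, xte, ell=1.0, sn2=1e-4):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    return k(xte, xtr) @ np.linalg.solve(k(xtr, xtr) + sn2 * np.eye(len(xtr)), ytr)

# No imputation step: the GP conditions only on the observed entries and
# returns predictions (with uncertainty, if desired) at every position.
obs = ~np.isnan(y)
mu = gp_mean(x[obs], y[obs], x)
```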
Affiliation(s)
- Hanwen Xing
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Christopher Yau
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Health Data Research UK, London, UK
9
Côté JN, Germain M, Levac E, Lavigne E. Vulnerability assessment of heat waves within a risk framework using artificial intelligence. Sci Total Environ 2024; 912:169355. PMID: 38123103. DOI: 10.1016/j.scitotenv.2023.169355. Received 09/23/2023; revised 12/06/2023; accepted 12/11/2023.
Abstract
Current efforts to adapt to climate change are not sufficient to reduce projected impacts. Vulnerability assessments are essential to allocate resources where they are needed most. However, current assessments that use principal component analysis suffer from multiple shortcomings and are hard to translate into concrete actions. To address these issues, this article proposes a novel data-driven vulnerability assessment within a risk framework. The framework is based on the definitions from the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, but some definitions, such as sensitivity and adaptive capacity, are clarified. Heat waves that occurred between 2001 and 2018 in Quebec (Canada) are used to validate the framework. The impact studied is the daily mortality rate per cooling degree-day (CDD) region. A vulnerability map is produced to identify the distributions of summer mortality rates in aggregate dissemination areas within each CDD region. Socioeconomic and environmental variables are used to calculate impact and vulnerability. We compared the abilities of AutoGluon (an AutoML framework), Gaussian processes, and deep Gaussian processes to model impact and vulnerability. We offer advice on how to avoid common pitfalls with artificial intelligence and machine-learning algorithms. The Gaussian process is a promising approach for supporting the proposed framework. SHAP values provide an explanation for the model results and are consistent with current knowledge of vulnerability. Recommendations are made for implementing the proposed framework quantitatively or qualitatively.
Affiliation(s)
- Jean-Nicolas Côté
- Department of Applied Geomatics, Université de Sherbrooke, 2500, boulevard de l'Université, Sherbrooke J1K 2R1, Quebec, Canada
- Mickaël Germain
- Department of Applied Geomatics, Université de Sherbrooke, 2500, boulevard de l'Université, Sherbrooke J1K 2R1, Quebec, Canada
- Elisabeth Levac
- Department of Environment, Agriculture and Geography, Bishop's University, 2600 College St., Sherbrooke J1M 1Z7, Quebec, Canada
- Eric Lavigne
- Environmental Health Science and Research Bureau, Health Canada, Ottawa, Ontario, Canada
- School of Epidemiology & Public Health, University of Ottawa, Ottawa, Ontario, Canada
10
Freeman NLB, Browder SE, McGinigle KL, Kosorok MR. Individualized treatment rule characterization via a value function surrogate. Biometrics 2024; 80:ujad012. PMID: 38372403. PMCID: PMC10875523. DOI: 10.1093/biomtc/ujad012. Received 11/07/2022; revised 10/19/2023; accepted 11/14/2023.
Abstract
Precision medicine is a promising framework for generating evidence to improve health and health care. Yet a gap persists between the ever-growing number of statistical precision medicine strategies for evidence generation and their implementation in real-world clinical settings, and the strategies for closing this gap will likely be context-dependent. In this paper, we consider the specific context of partial compliance to wound management among patients with peripheral artery disease. Using a Gaussian process surrogate for the value function, we show the feasibility of using Bayesian optimization to learn optimal individualized treatment rules. Further, we expand beyond the common precision medicine task of learning an optimal individualized treatment rule to the characterization of classes of individualized treatment rules, and show how those findings can be translated into clinical contexts.
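The value-surrogate idea can be sketched as standard Bayesian optimization: a GP surrogate of a (hypothetical) 1-D value function is refined by sequentially evaluating the point that maximizes expected improvement. The value function and its optimum below are invented for illustration, not the paper's wound-management value function.

```python
import numpy as np
from math import erf, sqrt

Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))  # standard normal CDF

def gp(xtr, ytr, xte, ell=0.2, sn2=1e-6):
    """Posterior mean and std of a GP with an RBF kernel (unit prior variance)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = k(xtr, xtr) + sn2 * np.eye(len(xtr))
    Ks = k(xtr, xte)
    mu = Ks.T @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

value = lambda d: -(d - 0.63) ** 2   # hypothetical value of a 1-D treatment "rule"
X = np.array([0.1, 0.5, 0.9])
y = value(X)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(10):                  # Bayesian optimization loop
    mu, sd = gp(X, y, grid)
    z = (mu - y.max()) / sd
    ei = sd * (z * Phi(z) + np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi))
    x_next = grid[np.argmax(ei)]     # expected-improvement acquisition
    X, y = np.append(X, x_next), np.append(y, value(x_next))

best = X[np.argmax(y)]               # near the true optimum at 0.63
```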
Affiliation(s)
- Nikki L B Freeman
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, United States
- Sydney E Browder
- Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, United States
- Katharine L McGinigle
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, United States
- Michael R Kosorok
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, United States
11
Sawe SJ, Mugo R, Wilson-Barthes M, Osetinsky B, Chrysanthopoulou SA, Yego F, Mwangi A, Galárraga O. Gaussian process emulation to improve efficiency of computationally intensive multidisease models: a practical tutorial with adaptable R code. BMC Med Res Methodol 2024; 24:26. PMID: 38281017. PMCID: PMC10821551. DOI: 10.1186/s12874-024-02149-x. Received 01/27/2023; accepted 01/11/2024. Open access.
Abstract
BACKGROUND: The rapidly growing burden of non-communicable diseases (NCDs) among people living with HIV in sub-Saharan Africa (SSA) has expanded the number of multidisease models predicting future care needs and health system priorities. The usefulness of these models depends on their ability to replicate real-life data and to be readily understood and applied by public health decision-makers; yet existing simulation models of HIV comorbidities are computationally expensive and require large numbers of parameters and long run times, hindering their utility in resource-constrained settings. METHODS: We present a novel, user-friendly emulator that can efficiently approximate complex simulators of long-term HIV and NCD outcomes in Africa. We describe how to implement the emulator via a tutorial based on publicly available data from Kenya. Emulator parameters relating to incidence and prevalence of HIV, hypertension and depression were derived from our own agent-based simulation model and other published literature. Gaussian processes were used to fit the emulator to simulator estimates, assuming the presence of noise at design points. Bayesian posterior predictive checks and leave-one-out cross validation confirmed the emulator's descriptive accuracy. RESULTS: In this example, our emulator resulted in a 13-fold (95% Confidence Interval (CI): 8-22) improvement in computing time compared to that of more complex chronic disease simulation models. One emulator run took 3.00 seconds (95% CI: 1.65-5.28) on a 64-bit laptop with 8.00 gigabytes (GB) of Random Access Memory (RAM), compared to > 11 hours for 1000 simulator runs on a high-performance computing cluster with 1500 GB of RAM. Pareto k estimates were < 0.70 for all emulations, demonstrating sufficient predictive accuracy of the emulator. CONCLUSIONS: The emulator presented in this tutorial offers a practical and flexible modelling tool that can help inform health policy-making in countries with a generalized HIV epidemic and a growing NCD burden. Future emulator applications could forecast the changing burden of HIV, hypertension and depression over an extended (> 10 year) period, estimate longer-term prevalence of other co-occurring conditions (e.g., postpartum depression among women living with HIV), and project the impact of nationally prioritized interventions such as national health insurance schemes and differentiated care models.
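The core emulate-then-validate loop of such a tutorial can be sketched in a few lines (in Python rather than the tutorial's R, with a cheap stand-in for the expensive simulator): fit a GP to simulator runs at a set of design points, then check accuracy by leave-one-out cross validation.

```python
import numpy as np

# Stand-in for an expensive simulator (hours per run in the real setting)
simulator = lambda x: np.sin(3.0 * x) + 0.5 * x
X = np.linspace(0.0, 2.0, 10)        # design points
y = simulator(X)                     # one simulator run per design point

def gp_mean(xtr, ytr, xte, ell=0.5, sn2=1e-6):
    """Posterior mean of a GP emulator with an RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    return k(xte, xtr) @ np.linalg.solve(k(xtr, xtr) + sn2 * np.eye(len(xtr)), ytr)

# Leave-one-out cross validation: refit without each design point and check
# how well the emulator predicts the held-out simulator run.
loo_err = np.array([abs(gp_mean(np.delete(X, i), np.delete(y, i), X[i:i + 1])[0] - y[i])
                    for i in range(len(X))])
```

Once validated, the emulator replaces the simulator in downstream analyses: each prediction is a small linear solve rather than a full model run, which is where the reported order-of-magnitude speed-ups come from.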
Affiliation(s)
- Sharon Jepkorir Sawe
- African Center of Excellence in Data Science, University of Rwanda, Kigali, Rwanda
- Richard Mugo
- Academic Model Providing Access to Healthcare, Eldoret, Kenya
- Marta Wilson-Barthes
- Department of Epidemiology, Brown University School of Public Health, Providence, RI, USA
- Brianna Osetinsky
- Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, Basel, Switzerland
- Faith Yego
- Department of Health Policy Management & Human Nutrition, Moi University School of Public Health, Eldoret, Kenya
- Ann Mwangi
- Academic Model Providing Access to Healthcare, Eldoret, Kenya
- Department of Mathematics, Physics & Computing, School of Science and Aerospace Studies, Moi University, Eldoret, Kenya
- Omar Galárraga
- Academic Model Providing Access to Healthcare, Eldoret, Kenya
- Department of Health Services, Policy and Practice, and International Health Institute, Brown University School of Public Health, Providence, RI, USA
12
Pan C, Tian Y, Zhou T, Li J. Personalized Prediction of Parkinson's Disease Progression Based on Deep Gaussian Processes. Stud Health Technol Inform 2024; 310:765-769. PMID: 38269912. DOI: 10.3233/shti231068.
Abstract
Parkinson's disease is a chronic, progressive neurodegenerative disease with highly heterogeneous symptoms and progression. Establishing a personalized model that integrates heterogeneous interpretation methods to predict disease progression is helpful for patient management. In this study, we propose a novel approach based on a multi-task learning framework that divides Parkinson's disease progression modeling into an unsupervised clustering task and a disease progression prediction task. On the one hand, the method can cluster patients with different progression trajectories and discover new progression patterns of Parkinson's disease. On the other hand, the discovery of new progression patterns helps to predict the future progression of Parkinson's disease markers more accurately through parameter sharing among the tasks. We discovered three distinct Parkinson's disease progression patterns and achieved better prediction performance (MAE = 5.015, RMSE = 7.284, r2 = 0.727) than previously proposed methods on the Parkinson's Progression Markers Initiative dataset, a longitudinal cohort study of patients with newly diagnosed Parkinson's disease.
Affiliation(s)
- Changrong Pan
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Yu Tian
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Tianshu Zhou
- Research Center for Healthcare Data Science, Zhejiang Laboratory, Hangzhou, China
- Jingsong Li
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Research Center for Healthcare Data Science, Zhejiang Laboratory, Hangzhou, China
13
Huber F, Inderka A, Steinhage V. Leveraging Remote Sensing Data for Yield Prediction with Deep Transfer Learning. Sensors (Basel) 2024; 24:770. PMID: 38339487. PMCID: PMC10857376. DOI: 10.3390/s24030770.
Abstract
Remote sensing data are one of the most important sources for automated yield prediction. High temporal and spatial resolution, historical record availability, reliability, and low cost are key factors in predicting yields around the world. Yield prediction is a challenging machine learning task, as reliable ground truth data are difficult to obtain and new data points can only be acquired once a year, at harvest. Factors that influence annual yields are plentiful, and data acquisition can be expensive, as crop-related data often need to be captured by experts or specialized sensors. Deep transfer learning based on remote sensing data can address both problems. Satellite images are free of charge, and transfer learning allows yield-related patterns to be recognized in countries where data are plentiful and the knowledge to be transferred to other domains, limiting the number of ground truth observations needed. In this study, we examine the use of transfer learning for yield prediction, with a distinctive preprocessing of the data into histograms. We present a deep transfer learning framework for yield prediction and demonstrate its successful application in transferring knowledge gained from US soybean yield prediction to soybean yield prediction in Argentina. We perform a temporal alignment of the two domains and improve transfer learning by applying several techniques, such as L2-SP, BSS, and layer freezing, to overcome catastrophic forgetting and negative transfer. Lastly, we exploit spatio-temporal patterns within the data by applying a Gaussian process. We improve the performance of soybean yield prediction in Argentina by 19% in terms of RMSE and 39% in terms of R2 compared with predictions without transfer learning and Gaussian processes. This proof of concept for advanced transfer learning techniques applied to yield prediction and histogram-based remote sensing data can enable successful yield prediction, especially in emerging and developing countries where reliable data are usually limited.
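The L2-SP technique mentioned above penalizes fine-tuned weights for drifting away from their source-domain starting point. A minimal numpy sketch for a linear model (the toy data, weight shift, and alpha value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Source" task: plenty of data; ordinary least squares gives w_source.
X_src = rng.normal(size=(500, 3))
w_true_src = np.array([1.0, -2.0, 0.5])
y_src = X_src @ w_true_src + 0.1 * rng.normal(size=500)
w_source, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# "Target" task: few samples, slightly shifted weights.
X_tgt = rng.normal(size=(20, 3))
w_true_tgt = w_true_src + np.array([0.3, 0.2, -0.1])
y_tgt = X_tgt @ w_true_tgt + 0.1 * rng.normal(size=20)

def fit_l2sp(X, y, w_start, alpha):
    """Minimize ||Xw - y||^2 + alpha * ||w - w_start||^2 (closed form)."""
    d = X.shape[1]
    A = X.T @ X + alpha * np.eye(d)
    b = X.T @ y + alpha * w_start
    return np.linalg.solve(A, b)

w_plain = fit_l2sp(X_tgt, y_tgt, np.zeros(3), 0.0)  # no regularization
w_l2sp = fit_l2sp(X_tgt, y_tgt, w_source, 5.0)      # shrink toward source
```

With a very large alpha the fine-tuned weights collapse back onto the source weights; with alpha = 0 the fit ignores the source task entirely.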
Affiliation(s)
- Volker Steinhage
- Department of Computer Science IV, University of Bonn, 53121 Bonn, Germany
14
Wang Y, Zhao W, Ross A, You L, Wang H, Zhou X. Revealing chronic disease progression patterns using Gaussian process for stage inference. J Am Med Inform Assoc 2024; 31:396-405. PMID: 38055638. PMCID: PMC10797260. DOI: 10.1093/jamia/ocad230.
Abstract
OBJECTIVE The early stages of chronic disease typically progress slowly, so symptoms are often not noticed until the disease is advanced. Slow progression and heterogeneous manifestations make it challenging to model the transition from normal to disease status. Because patient conditions are only observed at discrete timestamps with varying intervals, an incomplete understanding of disease progression and heterogeneity affects clinical practice and drug development. MATERIALS AND METHODS We developed the Gaussian Process for Stage Inference (GPSI) approach to uncover chronic disease progression patterns and assess the dynamic contribution of clinical features. We tested the ability of the GPSI to reliably stratify synthetic and real-world data for osteoarthritis (OA) in the Osteoarthritis Initiative (OAI), bipolar disorder (BP) in the Adolescent Brain Cognitive Development Study (ABCD), and hepatocellular carcinoma (HCC) in the UTHealth and The Cancer Genome Atlas (TCGA) cohorts. RESULTS First, GPSI identified two subgroups of OA based on image features; these subgroups corresponded to different genotypes, implicating bone-remodeling and overweight-related pathways. Second, GPSI differentiated BP into two distinct developmental patterns and defined the contribution of specific brain region atrophy from early to advanced disease stages, demonstrating the ability of the GPSI to identify diagnostic subgroups. Third, HCC progression patterns were well reproduced in the two independent UTHealth and TCGA datasets. CONCLUSION Our study demonstrated that an unsupervised approach can disentangle temporal and phenotypic heterogeneity and identify population subgroups with common patterns of disease progression. Based on the differences in these features across stages, physicians can better tailor treatment plans and medications to individual patients.
Affiliation(s)
- Yanfei Wang
- Center for Computational Systems Medicine, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
- Weiling Zhao
- Center for Computational Systems Medicine, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
- Angela Ross
- Center for Computational Systems Medicine, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
- Lei You
- Center for Computational Systems Medicine, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
- Hongyu Wang
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
- Cizik School of Nursing, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
- Xiaobo Zhou
- Center for Computational Systems Medicine, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
15
Mao X, Chang YC, Zanos S, Lajoie G. Personalized inference for neurostimulation with meta-learning: a case study of vagus nerve stimulation. J Neural Eng 2024; 21:016004. PMID: 38131193. DOI: 10.1088/1741-2552/ad17f4.
Abstract
Objective. Neurostimulation is emerging as a treatment for several diseases of the brain and peripheral organs. Due to variability arising from the placement of stimulation devices, underlying neuroanatomy and physiological responses to stimulation, it is essential that neurostimulation protocols be personalized to maximize efficacy and safety. Building such personalized protocols would benefit from the information accumulated in increasingly large datasets of other individuals' responses. Approach. To address that need, we propose a family of meta-learning algorithms to conduct few-shot optimization of key fitting parameters of physiological and neural responses in new individuals. While our method is agnostic to neurostimulation setting, here we demonstrate its effectiveness on the problem of physiological modeling of fiber recruitment during vagus nerve stimulation (VNS). Using data from acute VNS experiments, the mapping between amplitudes of stimulus-evoked compound action potentials (eCAPs) and physiological responses, such as heart rate and breathing interval modulation, is inferred. Main results. Using additional synthetic datasets to complement the experimental results, we demonstrate that our meta-learning framework can directly model the physiology-eCAP relationship for individual subjects with far fewer individually queried data points than standard methods. Significance. Our meta-learning framework is general and can be adapted to many input-response neurostimulation mapping problems. Moreover, the method leverages information from growing datasets of past patients as a treatment is deployed. It can also be combined with several model types, including regression, Gaussian processes with Bayesian optimization, and beyond.
Affiliation(s)
- Ximeng Mao
- Mila-Quebec Artificial Intelligence Institute, 6666 St-Urbain, Montréal, QC H2S 3H1, Canada
- Department of Computer Science and Operations Research, University of Montréal, 2920 chemin de la Tour, Montréal, QC H3T 1J4, Canada
- Yao-Chuan Chang
- Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Manhasset, NY 11030, United States of America
- Medtronic, 710 Medtronic Parkway, Minneapolis, MN 55432, United States of America
- Stavros Zanos
- Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Manhasset, NY 11030, United States of America
- Guillaume Lajoie
- Mila-Quebec Artificial Intelligence Institute, 6666 St-Urbain, Montréal, QC H2S 3H1, Canada
- Department of Mathematics and Statistics, University of Montréal, 2920 chemin de la Tour, Montréal, QC H3T 1J4, Canada
- Canada CIFAR AI Chair, Toronto, ON M5G 1M1, Canada
16
Colliandre L, Muller C. Bayesian Optimization in Drug Discovery. Methods Mol Biol 2024; 2716:101-136. PMID: 37702937. DOI: 10.1007/978-1-0716-3449-3_5.
Abstract
Drug discovery deals with the search for initial hits and their optimization toward a targeted clinical profile. Throughout the discovery pipeline the candidate profile evolves, but optimization largely remains a trial-and-error process. Many in silico methods have been developed to improve and accelerate this pipeline. Bayesian optimization (BO) is a well-known method for determining the global optimum of a function, and in the last decade it has gained popularity in the early drug design phase. This chapter starts with the concept of black-box optimization applied to drug design and presents some approaches to tackle it. It then focuses on BO, explaining its principle and all the algorithmic building blocks needed to implement it. The explanation aims to be accessible to people involved in drug discovery projects. Strong emphasis is placed on solutions that deal with the specific constraints of drug discovery. Finally, a large set of practical applications of BO is highlighted.
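The surrogate-plus-acquisition loop at the heart of BO can be illustrated in a few lines. This sketch minimizes a toy one-dimensional "score" with a scikit-learn Gaussian process surrogate and the expected-improvement acquisition; the objective and all settings are illustrative, not taken from the chapter:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    """Toy black-box score to minimize (stand-in for e.g. a docking score)."""
    return (x - 0.6) ** 2

X_grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)  # candidate points
X = np.array([[0.1], [0.5], [0.9]])                  # initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)

for _ in range(8):
    gp.fit(X, y)
    mu, sigma = gp.predict(X_grid, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # guard against zero variance
    imp = y.min() - mu                       # improvement (minimization)
    z = imp / sigma
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = X_grid[np.argmax(ei)]           # most promising candidate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

best_x, best_y = X[np.argmin(y), 0], y.min()
```

Each iteration refits the surrogate and queries the expensive objective only where expected improvement is highest, the balance of exploration and exploitation the chapter describes.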
17
Tao Z, Tanaka T, Zhao Q. Nonparametric tensor ring decomposition with scalable amortized inference. Neural Netw 2024; 169:431-441. PMID: 37931474. DOI: 10.1016/j.neunet.2023.10.031.
Abstract
Multi-dimensional data are common in many applications, such as videos and multi-variate time series. While tensor decomposition (TD) provides promising tools for analyzing such data, several limitations remain. First, traditional TDs assume multi-linear structures for the latent embeddings, which greatly limits their expressive power. Second, TDs cannot be straightforwardly applied to datasets with massive samples. To address these issues, we propose a nonparametric TD with amortized inference networks. Specifically, we establish a non-linear extension of tensor ring decomposition, using neural networks, to model complex latent structures. To jointly model the cross-sample correlations and physical structures, a matrix Gaussian process (GP) prior is imposed over the core tensors. From a learning perspective, we develop a VAE-like amortized inference network to infer the posterior of core tensors corresponding to new tensor data, which enables TDs to be applied to large datasets. Our model can also be viewed as a kind of decomposed VAE, which can additionally capture hidden tensor structure and enhance expressive power. Finally, we derive an evidence lower bound such that a scalable optimization algorithm can be developed. The advantages of our method have been evaluated extensively by data imputation on the Healing MNIST dataset and four multi-variate time series datasets.
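For readers new to the tensor ring format that this work extends nonparametrically: each tensor entry is the trace of a product of core slices, with the last core wrapping around to the first. A minimal numpy sketch (the shapes and TR-ranks are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tensor ring (TR) cores for a tensor of shape (4, 5, 6), TR-ranks (2, 3, 2).
G1 = rng.normal(size=(2, 4, 3))
G2 = rng.normal(size=(3, 5, 2))
G3 = rng.normal(size=(2, 6, 2))

# X[i, j, k] = trace(G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :]); the trailing
# rank index of G3 wraps around to the leading rank index of G1 ("ring").
X = np.einsum('aib,bjc,cka->ijk', G1, G2, G3)

# Sanity check one entry against the explicit trace formula.
manual = np.trace(G1[:, 1, :] @ G2[:, 2, :] @ G3[:, 3, :])
```

The linear contraction above is exactly what the paper replaces with neural networks to obtain a non-linear decomposition.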
Affiliation(s)
- Zerui Tao
- Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 184-8588, Tokyo, Japan; RIKEN Center for Advanced Intelligence Project (AIP), 103-0027, Tokyo, Japan
- Toshihisa Tanaka
- Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 184-8588, Tokyo, Japan; RIKEN Center for Advanced Intelligence Project (AIP), 103-0027, Tokyo, Japan
- Qibin Zhao
- Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 184-8588, Tokyo, Japan; RIKEN Center for Advanced Intelligence Project (AIP), 103-0027, Tokyo, Japan
18
Mutiso F, Li H, Pearce JL, Benjamin-Neelon SE, Mueller NT, Neelon B. Bayesian kernel machine regression for count data: modelling the association between social vulnerability and COVID-19 deaths in South Carolina. J R Stat Soc Ser C Appl Stat 2024; 73:257-274. PMID: 38222066. PMCID: PMC10782459. DOI: 10.1093/jrsssc/qlad094.
Abstract
The COVID-19 pandemic created an unprecedented global health crisis. Recent studies suggest that socially vulnerable communities were disproportionately impacted, although findings are mixed. To quantify social vulnerability in the US, many studies rely on the Social Vulnerability Index (SVI), a county-level measure comprising 15 census variables. Typically, the SVI is modelled in an additive manner, which may obscure non-linear or interactive associations, further contributing to inconsistent findings. As a more robust alternative, we propose a negative binomial Bayesian kernel machine regression (BKMR) model to investigate dynamic associations between social vulnerability and COVID-19 death rates, thus extending BKMR to the count data setting. The model produces a 'vulnerability effect' that quantifies the impact of vulnerability on COVID-19 death rates in each county. The method can also identify the relative importance of various SVI variables and make future predictions as county vulnerability profiles evolve. To capture spatio-temporal heterogeneity, the model incorporates spatial effects, county-level covariates, and smooth temporal functions. For Bayesian computation, we propose a tractable data-augmented Gibbs sampler. We conduct a simulation study to highlight the approach and apply the method to a study of COVID-19 deaths in the US state of South Carolina during the 2021 calendar year.
Affiliation(s)
- Fedelis Mutiso
- Division of Biostatistics, Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, USA
- Hong Li
- Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, CA, USA
- John L Pearce
- Division of Environmental Health, Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, USA
- Sara E Benjamin-Neelon
- Department of Health, Behavior and Society, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Noel T Mueller
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Brian Neelon
- Division of Biostatistics, Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, USA
19
Sbragio R, Filho OR, Martins MR. Methodology for the estimation of an oil spill origin: Analysis of the 2019 Brazilian coast oil spill. Mar Pollut Bull 2023; 197:115676. PMID: 37897965. DOI: 10.1016/j.marpolbul.2023.115676.
Abstract
This research presents a procedure for determining the origin of marine pollution through time-direct trajectory modelling combined with a Kriging metamodel and Monte Carlo random sampling. These methods were applied to a real case, the oil spill that affected the Brazilian coast in the second half of 2019 and early 2020. A total of 140 trajectories, each defined by the geographical coordinates of the origin and the spill date, were generated through Latin Hypercube Sampling and simulated using the PyGNOME model to construct the Kriging metamodel. The metamodel proved cost-effective by efficiently simulating numerous input combinations, which were compared and optimized against the available real data on the temporal and spatial distribution of the pollution.
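The sampling-plus-metamodel pipeline can be sketched compactly: stratified Latin Hypercube draws of candidate origins, an expensive "simulator" evaluated at those points, and a radial-basis Kriging-style interpolator fitted once and then queried cheaply. The toy misfit function below stands in for a scored PyGNOME run; the coordinates, kernel length scale, and bounds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, d, rng):
    """One sample per stratum in each dimension, strata randomly paired."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # stratified draws
    for col in range(d):
        u[:, col] = rng.permutation(u[:, col])            # pair strata at random
    return u

# 140 candidate origins in (lon, lat), scaled to a toy search box.
unit = latin_hypercube(140, 2, rng)
lo, hi = np.array([-40.0, -20.0]), np.array([-30.0, -5.0])
origins = lo + unit * (hi - lo)

def simulator_misfit(p):
    """Stand-in for a PyGNOME run scored against observed beaching data."""
    return np.sum((p - np.array([-35.0, -12.0])) ** 2, axis=-1)

y = simulator_misfit(origins)

# Simple RBF "kriging" surrogate: fit once, then query cheaply.
def rbf(A, B, length=3.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * length**2))

K = rbf(origins, origins) + 1e-6 * np.eye(len(origins))  # nugget for stability
alpha = np.linalg.solve(K, y)

def surrogate(p):
    return rbf(np.atleast_2d(p), origins) @ alpha
```

The surrogate can then score thousands of candidate origins in a Monte Carlo search at a tiny fraction of the cost of running the trajectory model itself.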
Affiliation(s)
- Ricardo Sbragio
- University of São Paulo, Naval Architecture and Ocean Engineering Department, Analysis, Evaluation and Risk Management Laboratory - LabRisco, Av. Prof. Mello Moraes, 2231, Cidade Universitária, São Paulo 05508-030, SP, Brazil
- Marcelo Ramos Martins
- University of São Paulo, Naval Architecture and Ocean Engineering Department, Analysis, Evaluation and Risk Management Laboratory - LabRisco, Av. Prof. Mello Moraes, 2231, Cidade Universitária, São Paulo 05508-030, SP, Brazil
20
Rule ME, Chaudhuri-Vayalambrone P, Krstulovic M, Bauza M, Krupic J, O'Leary T. Variational log-Gaussian point-process methods for grid cells. Hippocampus 2023; 33:1235-1251. PMID: 37749821. PMCID: PMC10962565. DOI: 10.1002/hipo.23577.
Abstract
We present practical solutions for applying Gaussian-process (GP) methods to calculate spatial statistics for grid cells in large environments. GPs are a data-efficient approach to inferring neural tuning as a function of time, space, and other variables. We discuss how to design appropriate kernels for grid cells, and show that a variational Bayesian approach to log-Gaussian Poisson models can be computed quickly. This class of models has closed-form expressions for the evidence lower bound and can be estimated rapidly for certain parameterizations of the posterior covariance. We provide an implementation that operates in a low-rank spatial frequency subspace for further acceleration, and demonstrate these methods on experimental data.
Affiliation(s)
- Marino Krstulovic
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Marius Bauza
- Sainsbury Wellcome Centre, University College London, London, UK
- Julija Krupic
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
21
G K AV, Gogoi G, Kachappilly MC, Rangarajan A, Pandya HJ. Label-free multimodal electro-thermo-mechanical (ETM) phenotyping as a novel biomarker to differentiate between normal, benign, and cancerous breast biopsy tissues. J Biol Eng 2023; 17:68. PMID: 37957665. PMCID: PMC10644568. DOI: 10.1186/s13036-023-00388-y.
Abstract
BACKGROUND Technologies for quick, label-free diagnosis of malignancies from breast tissue have the potential to be a significant adjunct to routine diagnostics. Biophysical phenotypes of breast tissue, such as its electrical, thermal, and mechanical (ETM) properties, can serve as novel markers to differentiate between normal, benign, and malignant tissue. RESULTS We report a system-of-biochips (SoB) integrated into a semi-automated mechatronic system that can characterize breast biopsy tissue using electro-thermo-mechanical sensing. The SoB, fabricated on silicon using microfabrication techniques, measures the electrical impedance (Z), thermal conductivity (K), mechanical stiffness (k), and viscoelastic stress relaxation (%R) of the samples. The key sensing elements of the biochips include interdigitated electrodes, resistance temperature detectors, microheaters, and a micromachined diaphragm with piezoresistive bridges. Multi-modal ETM measurements performed on formalin-fixed tumour and adjacent normal breast biopsy samples from N = 14 subjects differentiated between invasive ductal carcinoma (malignant), fibroadenoma (benign), and adjacent normal (healthy) tissue with a root mean square error of 0.2419 using a Gaussian process classifier. Carcinoma tissue had the highest mean impedance (110018.8 ± 20293.8 Ω) and stiffness (0.076 ± 0.009 kNm-1) and the lowest thermal conductivity (0.189 ± 0.019 Wm-1 K-1) of the three groups, while the fibroadenoma samples had the highest percentage relaxation in normalized load (47.8 ± 5.12%). CONCLUSIONS The work presents a novel strategy for characterizing the multi-modal biophysical phenotype of breast biopsy tissue to aid cancer diagnosis from small tumour samples. The methodology is envisioned to fill the existing technology gap in the analysis of breast tissue samples in pathology laboratories and to aid the diagnostic workflow.
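A Gaussian process classifier of the kind applied above can be sketched with scikit-learn. The three-class "tissue" features below are simulated to loosely mimic the reported trend (high impedance and stiffness, low conductivity for carcinoma) and are not the paper's measurements:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# Simulated standardized (Z, k, K) features for three tissue classes.
centers = np.array([[1.5, 1.5, -1.5],    # malignant: high Z, high k, low K
                    [0.0, -0.5, 0.5],    # benign
                    [-1.0, -1.0, 1.0]])  # normal
X = np.vstack([c + 0.3 * rng.normal(size=(30, 3)) for c in centers])
y = np.repeat([0, 1, 2], 30)

clf = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

Beyond a class label, the classifier's `predict_proba` output gives calibrated class probabilities, which is one practical reason GP classifiers are attractive for diagnostic support.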
Affiliation(s)
- Anil Vishnu G K
- Center for BioSystems Science and Engineering, Indian Institute of Science, Bangalore, Karnataka, 560012, India
- Gayatri Gogoi
- Department of Pathology, Assam Medical College, Dibrugarh, Assam, 786002, India
- Midhun C Kachappilly
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, Karnataka, 560012, India
- Annapoorni Rangarajan
- Department of Developmental Biology and Genetics, Indian Institute of Science, Bangalore, Karnataka, 560012, India
- Hardik J Pandya
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, Karnataka, 560012, India
- Centre for Product Design and Manufacturing, Indian Institute of Science, Bangalore, Karnataka, 560012, India
22
Liu Y, Ziatdinov MA, Vasudevan RK, Kalinin SV. Explainability and human intervention in autonomous scanning probe microscopy. Patterns (N Y) 2023; 4:100858. PMID: 38035198. PMCID: PMC10682748. DOI: 10.1016/j.patter.2023.100858.
Abstract
Broad adoption of machine learning (ML)-based autonomous experiments (AEs) in materials characterization and synthesis requires strategies for understanding and intervening in the experimental workflow. Here, we introduce and realize a post-experimental analysis strategy for deep kernel learning-based autonomous scanning probe microscopy. This approach yields real-time and post-experimental indicators of the progression of an active learning process interacting with an experimental system. We further illustrate how this approach can be applied to human-in-the-loop AEs, in which human operators make high-level, high-latency decisions that set the policies for the AEs, while the ML algorithm makes fast, low-level decisions. The proposed approach is universal and can be extended to other techniques and applications, such as combinatorial library analysis.
Affiliation(s)
- Yongtao Liu
- Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Maxim A. Ziatdinov
- Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Rama K. Vasudevan
- Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
- Sergei V. Kalinin
- Department of Materials Science and Engineering, University of Tennessee, Knoxville, TN 37996, USA
23
Xu S, Yang X, Zhang S, Zheng X, Zheng F, Liu Y, Zhang H, Ye Q, Li L. Machine learning models for orthokeratology lens fitting and axial length prediction. Ophthalmic Physiol Opt 2023; 43:1462-1468. PMID: 37574762. DOI: 10.1111/opo.13212.
Abstract
PURPOSE To improve the efficiency of orthokeratology (OK) lens fitting and to predict axial length after 1 year of OK lens wear, machine learning models were proposed. METHODS Clinical data from 1302 myopic subjects were collected retrospectively, and two machine learning models were implemented. Demographic and corneal topographic data were used as input variables; the output variables were the parameters of the OK lens and the axial length after 1 year. Eighty percent of the data was used as the training set and the remaining 20% as the validation set. The first alignment curve (AC1) of the OK lenses deduced using the machine learning models was compared with that obtained by formula calculation. Multiple regression models (support vector machine, Gaussian process, decision tree and random forest) were used to predict the axial length after 1 year. In addition, data were classified by lens brand, and more detailed parameter fitting and analysis were carried out for spherical and toric OK lenses. RESULTS The OK lens fitting model showed a higher R2 (0.93) and lower errors (mean absolute error [MAE] = 0.19, mean square error [MSE] = 0.09) when predicting AC1 than the formula calculation (R2 = 0.66, MAE = 0.44, MSE = 0.25). The machine learning model retained high R2 values, ranging from 0.91 to 0.96, when the brand and design of the OK lenses were considered. Further, the R2 value for the axial length prediction model was 0.94, indicating high accuracy and good robustness. CONCLUSION The OK lens fitting model and the axial length prediction model can play an important role in guiding OK lens fitting, with high accuracy and robustness in prediction performance.
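The four regression families compared in the study can be benchmarked side by side with scikit-learn. The synthetic inputs below merely illustrate the workflow; real predictors would be the demographic and corneal topographic variables described above:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

# Toy stand-ins: 6 predictors -> "axial length after 1 year" (mm).
X = rng.normal(size=(300, 6))
y = 24.0 + 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=300)

models = {
    "SVM": SVR(),
    "Gaussian process": GaussianProcessRegressor(alpha=1e-3, normalize_y=True),
    "Decision tree": DecisionTreeRegressor(random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Mean cross-validated R^2 for each model family.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
```

Cross-validated R2 is the same headline metric the study reports, so a table like `scores` is the natural way to reproduce such a comparison on one's own data.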
Affiliation(s)
- Shuai Xu
- Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics and TEDA Applied Physics, Nankai University, Tianjin, China
- Xiaoyan Yang
- Tianjin Eye Hospital Optometric Center, Tianjin, China
- Tianjin Eye Hospital, Tianjin, China
- Nankai University Affiliated Eye Hospital, Nankai University, Tianjin, China
- Shuxian Zhang
- Tianjin Eye Hospital Optometric Center, Tianjin, China
- Tianjin Eye Hospital, Tianjin, China
- Nankai University Affiliated Eye Hospital, Nankai University, Tianjin, China
- Xuan Zheng
- Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics and TEDA Applied Physics, Nankai University, Tianjin, China
- Fang Zheng
- Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics and TEDA Applied Physics, Nankai University, Tianjin, China
- Yin Liu
- School of Medicine, Nankai University, Tianjin, China
- Hanyu Zhang
- School of Medicine, Nankai University, Tianjin, China
- Qing Ye
- Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics and TEDA Applied Physics, Nankai University, Tianjin, China
- Lihua Li
- Tianjin Eye Hospital Optometric Center, Tianjin, China
- Tianjin Eye Hospital, Tianjin, China
- Nankai University Affiliated Eye Hospital, Nankai University, Tianjin, China
24
Simeone D, Lenatti M, Lagoa C, Keshavjee K, Guergachi A, Dabbene F, Paglialonga A. Multi-Input Multi-Output Dynamic Modelling of Type 2 Diabetes Progression. Stud Health Technol Inform 2023; 309:228-232. PMID: 37869847. DOI: 10.3233/shti230784.
Abstract
Type 2 Diabetes Mellitus (T2D) is a chronic health condition that affects millions of people globally. Early identification of risk can support preventive intervention and therefore slow down disease progression. Risk characterization is also necessary to monitor the mechanisms behind the pathology through the analysis of the interrelationships between the predictors and their time course. In this work, a multi-input multi-output Gaussian Process model is proposed to describe the evolution of different biomarkers in patients who will/will not develop T2D considering the interdependencies between outputs. The preliminary results obtained suggest that the trends in biomarkers captured by the model are coherent with the literature and with real-world data, demonstrating the value of multi-input multi-output approaches. In future developments, the proposed method could be applied to assess how the biomarkers evolve and interact with each other in groups of patients having in common one or more risk factors.
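A minimal sketch of a multi-output GP in the spirit of this abstract, using an intrinsic coregionalization covariance (output covariance B Kronecker-multiplied with a time kernel K) so that two observed series jointly inform the prediction of either one. The "biomarker" trajectories, the output covariance B, and the noise level are all invented for illustration.

```python
import numpy as np

def rbf(a, b, ell=2.0):
    # Squared-exponential kernel over time
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 25)                  # follow-up times (arbitrary units)
f = np.sin(0.5 * t)                         # shared latent trend
y1 = f + rng.normal(0, 0.05, t.size)        # invented biomarker 1
y2 = 0.8 * f + rng.normal(0, 0.05, t.size)  # invented, correlated biomarker 2

# Intrinsic coregionalization: full covariance = B (outputs) Kronecker K (time)
B = np.array([[1.0, 0.8], [0.8, 0.7]])      # assumed output covariance
K = np.kron(B, rbf(t, t)) + 0.05**2 * np.eye(2 * t.size)

# Jointly predict biomarker 2 at new times from BOTH observed series
y = np.concatenate([y1, y2])
t_new = np.linspace(0, 10, 50)
K_star = np.kron(B[1], rbf(t_new, t))       # cross-covariance rows for output 2
mu = K_star @ np.linalg.solve(K, y)
```

The off-diagonal entries of B are what let observations of one biomarker sharpen predictions of the other, which is the "interdependencies between outputs" idea.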
Affiliation(s)
- Davide Simeone: CNR-IEIIT, Milan, Italy; Politecnico di Milano, Milan, Italy
- Aziz Guergachi: Toronto Metropolitan University, Toronto, Canada; York University, Toronto, Canada

25.
Fazel M, Jazani S, Scipioni L, Vallmitjana A, Zhu S, Gratton E, Digman MA, Pressé S. Building Fluorescence Lifetime Maps Photon-by-Photon by Leveraging Spatial Correlations. ACS Photonics 2023; 10:3558-3569. [PMID: 38406580] [PMCID: PMC10890823] [DOI: 10.1021/acsphotonics.3c00595]
Abstract
Fluorescence lifetime imaging microscopy (FLIM) has become a standard tool in the quantitative characterization of subcellular environments. However, quantitative FLIM analyses face several challenges. First, spatial correlations between pixels are often ignored, as the signal from each pixel is analyzed independently, thereby limiting spatial resolution. Second, existing methods deduce photon ratios instead of absolute lifetime maps. Next, the number of fluorophore species contributing to the signal is unknown, and excited-state lifetimes differing by <1 ns are difficult to discriminate. Finally, existing analyses require high photon budgets and often cannot rigorously propagate experimental uncertainty into the lifetime maps and the number of species involved. To overcome all of these challenges simultaneously and self-consistently, we propose the first doubly nonparametric framework: we learn the number of species (using Beta-Bernoulli process priors) and absolute lifetime maps of these fluorophore species (using Gaussian process priors), while also leveraging information from pulses that do not lead to an observed photon. We benchmark our framework on a broad range of synthetic and experimental data and demonstrate its robustness across a number of scenarios, including cases where we recover lifetime differences between species as small as 0.3 ns with merely 1000 photons.
Affiliation(s)
- Mohamadreza Fazel: Center for Biological Physics and Department of Physics, Arizona State University, Tempe, Arizona 85287, United States
- Sina Jazani: Center for Biological Physics and Department of Physics, Arizona State University, Tempe, Arizona 85287, United States
- Lorenzo Scipioni: Department of Biomedical Engineering, University of California Irvine, Irvine, California 92697, United States; Laboratory of Fluorescence Dynamics, The Henry Samueli School of Engineering, University of California, Irvine, California 92697, United States
- Alexander Vallmitjana: Department of Biomedical Engineering, University of California Irvine, Irvine, California 92697, United States; Laboratory of Fluorescence Dynamics, The Henry Samueli School of Engineering, University of California, Irvine, California 92697, United States
- Songning Zhu: Department of Biomedical Engineering, University of California Irvine, Irvine, California 92697, United States; Laboratory of Fluorescence Dynamics, The Henry Samueli School of Engineering, University of California, Irvine, California 92697, United States
- Enrico Gratton: Department of Biomedical Engineering, University of California Irvine, Irvine, California 92697, United States; Laboratory of Fluorescence Dynamics, The Henry Samueli School of Engineering, University of California, Irvine, California 92697, United States
- Michelle A Digman: Department of Biomedical Engineering, University of California Irvine, Irvine, California 92697, United States; Laboratory of Fluorescence Dynamics, The Henry Samueli School of Engineering, University of California, Irvine, California 92697, United States
- Steve Pressé: Center for Biological Physics and Department of Physics, Arizona State University, Tempe, Arizona 85287, United States; School of Molecular Science, Arizona State University, Tempe, Arizona 85287, United States

26.
Jin Z, Kang J, Yu T. Bayesian nonparametric method for genetic dissection of brain activation region. Front Neurosci 2023; 17:1235321. [PMID: 37920300] [PMCID: PMC10618557] [DOI: 10.3389/fnins.2023.1235321]
Abstract
Biological evidence indicates that brain atrophy can be involved at the onset of the neuropathological pathways of Alzheimer's disease. However, there is a lack of formal statistical methods for the genetic dissection of brain activation phenotypes such as shape and intensity. To this end, we propose a Bayesian hierarchical model with two levels of hierarchy. At level 1, we develop a Bayesian nonparametric level set (BNLS) model for studying the shape of the brain activation region. At level 2, we construct a regression model to select genetic variants that are strongly associated with the brain activation intensity, where a spike-and-slab prior and a Gaussian prior are chosen for feature selection. We develop efficient posterior computation algorithms based on Markov chain Monte Carlo (MCMC) methods. We demonstrate the advantages of the proposed method via extensive simulation studies and analyses of imaging genetics data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study.
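The level-2 variable selection described above can be illustrated with a textbook spike-and-slab Gibbs sampler for linear regression (not the authors' implementation): each coefficient is either exactly zero (spike) or drawn from a Gaussian slab, and inclusion indicators are resampled from their conditional posterior. The data and hyperparameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.normal(size=(n, p))                  # stand-in for genetic variants
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(0, 1.0, n)    # activation-intensity proxy

# beta_j ~ (1 - z_j) * delta_0 + z_j * N(0, tau2);  z_j ~ Bernoulli(pi0)
tau2, sigma2, pi0 = 4.0, 1.0, 0.2
z = np.zeros(p, dtype=bool); beta = np.zeros(p); incl = np.zeros(p)
for it in range(2000):
    for j in range(p):
        r = y - X @ beta + X[:, j] * beta[j]       # residual excluding j
        xtx = X[:, j] @ X[:, j]
        v = 1.0 / (xtx / sigma2 + 1.0 / tau2)      # conditional slab variance
        m = v * (X[:, j] @ r) / sigma2             # conditional slab mean
        # log Bayes factor of slab vs. spike after integrating out beta_j
        log_bf = 0.5 * (np.log(v / tau2) + m * m / v)
        p_incl = 1.0 / (1.0 + (1 - pi0) / pi0 * np.exp(-log_bf))
        z[j] = rng.random() < p_incl
        beta[j] = rng.normal(m, np.sqrt(v)) if z[j] else 0.0
    if it >= 500:                                  # discard burn-in
        incl += z
print((incl / 1500).round(2))   # posterior inclusion probabilities
```

With the strong synthetic signal, the first three inclusion probabilities should be near 1 and the rest near the small prior-driven baseline.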
Affiliation(s)
- Zhuxuan Jin: Department of Biostatistics and Bioinformatics, Emory University, Atlanta, GA, United States
- Jian Kang: Department of Biostatistics, University of Michigan, Ann Arbor, MI, United States
- Tianwei Yu: School of Data Science, Chinese University of Hong Kong - Shenzhen, Shenzhen, China; Guangdong Provincial Key Laboratory of Big Data Computing, Shenzhen, China

27.
Assaf E, Buckley J, Feldheim N. An asymptotic formula for the variance of the number of zeroes of a stationary Gaussian process. Probab Theory Relat Fields 2023; 187:999-1036. [PMID: 37941811] [PMCID: PMC10628032] [DOI: 10.1007/s00440-023-01218-4]
Abstract
We study the variance of the number of zeroes of a stationary Gaussian process on a long interval. We give a simple asymptotic description under mild mixing conditions. This allows us to characterise minimal and maximal growth. We show that a small (symmetrised) atom in the spectral measure at a special frequency does not affect the asymptotic growth of the variance, while an atom at any other frequency results in maximal growth.
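The mean zero count that such variance asymptotics refine is given by the Kac-Rice formula, E[N(0,T)] = (T/pi) * sqrt(-r''(0)/r(0)), and it can be checked numerically. This sketch simulates a stationary process with covariance r(tau) = exp(-tau^2/2) via a random-Fourier-feature (spectral) approximation, which is only approximately Gaussian for a finite number of frequencies, and counts sign changes on a fine grid.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, m, reps = 20.0, 2000, 300, 200
t = np.linspace(0, T, n)
counts = []
for _ in range(reps):
    # r(tau) = exp(-tau^2/2) has a standard normal spectral measure, so draw
    # frequencies from N(0, 1) and uniform random phases.
    w = rng.normal(size=m)
    phi = rng.uniform(0, 2 * np.pi, m)
    x = np.sqrt(2.0 / m) * np.cos(np.outer(t, w) + phi).sum(axis=1)
    counts.append(int((np.diff(np.sign(x)) != 0).sum()))
counts = np.array(counts)
# Kac-Rice mean: here -r''(0) = r(0) = 1, so E[N(0, T)] = T / pi
print(counts.mean(), T / np.pi, counts.var())
```

The empirical variance printed last is the quantity whose long-interval growth the paper characterizes.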
28.
Zhu K, Huang C, Li S, Lin X. Physics-informed Gaussian process for tool wear prediction. ISA Trans 2023:S0019-0578(23)00414-7. [PMID: 37770369] [DOI: 10.1016/j.isatra.2023.09.007]
Abstract
Tool wear monitoring (TWM) systems play an increasingly important role in ensuring high-quality finishing and system safety in advanced CNC machining processes. Purely data-based TWM approaches generally need complex machine learning models and massive sensory data to reach high monitoring accuracy, while physics-based tool wear models are simple but hard to adapt to varied working conditions. To incorporate the benefits of both methods, a novel physics-informed Gaussian process model is developed to predict tool wear. Different from traditional approaches, three physical tool wear models are introduced to develop the physics-informed Gaussian process regression (PB-GPR) model. The wear model constrains the mean function of the Gaussian process, so that the PB-GPR is more in line with actual tool wear. At the same time, the model can be trained on small data sets, addressing the scarcity of tool wear labels in practice, and then updated with new measurements. Multi-sensor signals are collected and multi-domain features are extracted for model learning. The proposed approach is validated on high-speed milling experiments. The results show a significant performance improvement, in both tool wear prediction accuracy and robustness in extrapolation, compared with conventional machine learning methods.
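A minimal sketch of the core idea, constraining the GP mean by a physical model: fit a (hypothetical) Taylor-type power-law wear model, then let a GP model only the residual, so the combined prediction follows physics globally and data locally. This is not the authors' PB-GPR implementation, and the wear data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
t = np.linspace(1, 60, 40)                       # cutting time (min), synthetic
wear = 0.02 * t**0.8 + 0.03 * np.sin(0.3 * t) + rng.normal(0, 0.005, t.size)

# Physical mean: a hypothetical Taylor-type power-law wear model w(t) = a*t**b
power_law = lambda tt, a, b: a * tt**b
(a, b), _ = curve_fit(power_law, t, wear, p0=(0.01, 1.0))

# The GP models only the residual; prediction = physical mean + GP correction
gp = GaussianProcessRegressor(RBF(5.0) + WhiteKernel(1e-4)).fit(
    t[:, None], wear - power_law(t, a, b))
t_new = np.linspace(1, 60, 200)
pred = power_law(t_new, a, b) + gp.predict(t_new[:, None])
```

Because the physical model carries the trend, the GP needs far fewer labeled points than a purely data-driven fit, which mirrors the small-data training argument in the abstract.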
Affiliation(s)
- Kunpeng Zhu: Institute of Precision Manufacturing, School of Machinery and Automation, Wuhan University of Science and Technology, Wuhan 430081, China; Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Changzhou 213164, China
- Chengyi Huang: Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Changzhou 213164, China
- Si Li: Institute of Precision Manufacturing, School of Machinery and Automation, Wuhan University of Science and Technology, Wuhan 430081, China
- Xin Lin: Institute of Precision Manufacturing, School of Machinery and Automation, Wuhan University of Science and Technology, Wuhan 430081, China

29.
Sun W, Liu J, Hu J, Jin J, Siasoco K, Zhou R, Mccoy R. Adaptive restraint design for a diverse population through machine learning. Front Public Health 2023; 11:1202970. [PMID: 37637800] [PMCID: PMC10448517] [DOI: 10.3389/fpubh.2023.1202970]
Abstract
Objective: To use population-based simulations and machine learning algorithms to develop an adaptive restraint system that accounts for variations in occupant anthropometry and thereby further enhances the safety balance across the whole population. Methods: Two thousand MADYMO full frontal impact crash simulations at 35 mph were conducted using two validated vehicle/restraint models representing a sedan and an SUV, along with a parametric occupant model, based on a maximal projection design of experiments that varies occupant covariates (sex, stature and body mass index) and vehicle restraint design variables (three for the airbag, three for the safety belt and one for the knee bolster). A Gaussian-process-based surrogate model was trained to rapidly predict occupant injury risks and the associated uncertainties. An optimization framework was formulated to seek the optimal adaptive restraint design policy that minimizes the population injury risk across a wide range of occupant sizes and shapes while maintaining a low difference in injury risks among occupant subgroups. The effectiveness of the proposed method was tested by comparing population-wise injury risks under the adaptive design policy and the traditional state-of-the-art design. Results: Compared with the traditional state-of-the-art design for midsize males, the optimal design policy shows the potential to further reduce the joint injury risk (combining head, chest and lower-extremity injury risks) across the whole population in both the sedan and SUV models. In particular, two subgroups of vulnerable occupants, tall obese males and short obese females, saw larger reductions in injury risk. Conclusions: This study lays out a method to adaptively adjust vehicle restraint systems to improve safety balance, and is the first in which population-based crash simulations and machine learning are used to optimize adaptive restraint designs for a diverse population. It also highlights the high injury risks associated with obese and female occupants, which can be mitigated through restraint adaptability.
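The surrogate-plus-policy step can be sketched as follows, assuming a single occupant covariate (stature) and a single design variable with an invented risk surface; the real study used MADYMO simulations over many more covariates and design variables.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
# Invented "simulation" outputs: injury risk vs. occupant stature (m) and one
# normalized restraint design variable (e.g. a belt load-limit setting)
stature = rng.uniform(1.5, 1.9, 400)
design = rng.uniform(0.0, 1.0, 400)
risk = (design - (stature - 1.5) / 0.4) ** 2 + rng.normal(0, 0.01, 400)

X = np.column_stack([stature, design])
gp = GaussianProcessRegressor(RBF([0.1, 0.2]) + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, risk)

# Adaptive policy: for each occupant, pick the design minimizing predicted risk
grid = np.linspace(0.0, 1.0, 101)
def best_design(s):
    cand = np.column_stack([np.full_like(grid, s), grid])
    return grid[np.argmin(gp.predict(cand))]
```

Under the assumed risk surface, the recovered policy scales the design with stature, which is the essence of an adaptive (rather than one-size-fits-all) restraint design.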
Affiliation(s)
- Wenbo Sun: University of Michigan Transportation Research Institute (UMTRI), College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Jiacheng Liu: Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, United States
- Jingwen Hu: University of Michigan Transportation Research Institute (UMTRI), College of Engineering, University of Michigan, Ann Arbor, MI, United States
- Judy Jin: Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, United States

30.
Park JH, Lu AY, Tavakoli MM, Kim NY, Chiu MH, Liu H, Zhang T, Wang Z, Wang J, Martins LGP, Luo Z, Chi M, Miao J, Kong J. Revealing Variable Dependences in Hexagonal Boron Nitride Synthesis via Machine Learning. Nano Lett 2023. [PMID: 37196055] [DOI: 10.1021/acs.nanolett.2c04624]
Abstract
Wafer-scale monolayer two-dimensional (2D) materials have been realized by epitaxial chemical vapor deposition (CVD) in recent years. To scale up the synthesis of 2D materials, a systematic analysis of how the growth dynamics depend on the growth parameters is essential to unravel the growth mechanisms. However, studies of CVD-grown 2D materials have mostly varied one parameter at a time, treating each as an independent variable, which is not sufficient for optimizing 2D material growth. Herein, we synthesized a representative 2D material, monolayer hexagonal boron nitride (hBN), on single-crystalline Cu (111) by epitaxial CVD and varied the growth parameters to regulate the hBN domain sizes. Furthermore, we explored the correlations between pairs of growth parameters and identified growth windows for large flake sizes using a Gaussian process. This machine-learning-based analysis provides a more comprehensive understanding of the growth mechanism of 2D materials.
Affiliation(s)
- Ji-Hoon Park: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Ang-Yu Lu: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Mohammad Mahdi Tavakoli: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Na Yeon Kim: Department of Physics and Astronomy and California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California 90095, United States
- Ming-Hui Chiu: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Hongwei Liu: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States; Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong 999077, China
- Tianyi Zhang: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Zhien Wang: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Jiangtao Wang: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Zhengtang Luo: Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong 999077, China
- Miaofang Chi: Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, United States
- Jianwei Miao: Department of Physics and Astronomy and California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California 90095, United States
- Jing Kong: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States

31.
Preuss R, von Toussaint U. Outlier-Robust Surrogate Modeling of Ion-Solid Interaction Simulations. Entropy (Basel) 2023; 25:e25040685. [PMID: 37190472] [PMCID: PMC10137831] [DOI: 10.3390/e25040685]
Abstract
Data for complex plasma-wall interactions require long-running and expensive computer simulations. Furthermore, the number of input parameters is large, which results in low coverage of the (physical) parameter space. The unpredictable occurrence of outliers creates a need to explore this multi-dimensional space with robust analysis tools. We restate the Gaussian process (GP) method as a Bayesian adaptive exploration method for establishing surrogate surfaces in the variables of interest. On this basis, we extend the analysis to the Student-t process (TP) method in order to improve the robustness of the result with respect to outliers. The most obvious difference between the two methods shows up in the marginal likelihood for the hyperparameters of the covariance function, where the TP method features a broader marginal probability distribution in the presence of outliers. Finally, we provide first investigations of a mixture likelihood of two Gaussians within a Gaussian process ansatz, describing either outlier or non-outlier behavior, with the parameters of the two Gaussians set such that the mixture likelihood resembles the shape of a Student-t likelihood.
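The key observation, that the TP marginal likelihood varies less sharply over hyperparameters than the GP one when an outlier is present, can be reproduced in a few lines. The data, kernel, injected outlier and hyperparameter grid below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_logml(y, K):
    # Gaussian log marginal likelihood
    n = len(y)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + n * np.log(2 * np.pi))

def tp_logml(y, K, nu=4.0):
    # Multivariate Student-t log marginal likelihood with scale matrix K
    n, beta = len(y), y @ np.linalg.solve(K, y)
    _, logdet = np.linalg.slogdet(K)
    return (gammaln((nu + n) / 2) - gammaln(nu / 2)
            - 0.5 * n * np.log(nu * np.pi) - 0.5 * logdet
            - 0.5 * (nu + n) * np.log1p(beta / nu))

rng = np.random.default_rng(6)
x = np.linspace(0, 5, 30)
y = np.sin(x) + rng.normal(0, 0.1, 30)
y[15] += 3.0                                  # inject a single outlier

# Marginal likelihood curves over the signal-variance hyperparameter
svals = np.linspace(0.2, 5.0, 60)
gp_curve = [gp_logml(y, s * rbf(x, x) + 0.01 * np.eye(30)) for s in svals]
tp_curve = [tp_logml(y, s * rbf(x, x) + 0.01 * np.eye(30)) for s in svals]
```

The quadratic form enters the Gaussian evidence linearly but only logarithmically in the Student-t evidence, so the TP curve is flatter, i.e. the posterior over the hyperparameter is broader, exactly the behavior the abstract describes.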
Affiliation(s)
- Roland Preuss: Max-Planck-Institut für Plasmaphysik, 85748 Garching, Germany

32.
Li MY, Callaway F, Thompson WD, Adams RP, Griffiths TL. Learning to Learn Functions. Cogn Sci 2023; 47:e13262. [PMID: 37051879] [DOI: 10.1111/cogs.13262]
Abstract
Humans can learn complex functional relationships between variables from small amounts of data. In doing so, they draw on prior expectations about the form of these relationships. In three experiments, we show that people learn to adjust these expectations through experience, learning about the likely forms of the functions they will encounter. Previous work has used Gaussian processes-a statistical framework that extends Bayesian nonparametric approaches to regression-to model human function learning. We build on this work, modeling the process of learning to learn functions as a form of hierarchical Bayesian inference about the Gaussian process hyperparameters.
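The hierarchical "learning to learn" idea can be caricatured by empirical Bayes: estimate a shared GP hyperparameter (here a length scale, a proxy for expected function smoothness) by maximizing the summed log marginal likelihood over several tasks. This point estimate stands in for the full hierarchical Bayesian inference in the paper, and all data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = np.linspace(0, 5, 20)

def logml(y, ell, noise=0.01):
    # GP log marginal likelihood with an RBF kernel of length scale `ell`
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell) ** 2) + noise * np.eye(20)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + 20 * np.log(2 * np.pi))

# Draw several functions that share one smoothness (length scale 1.5) ...
true_ell = 1.5
Kt = np.exp(-0.5 * ((x[:, None] - x[None, :]) / true_ell) ** 2) + 1e-8 * np.eye(20)
L = np.linalg.cholesky(Kt)
tasks = [L @ rng.normal(size=20) + rng.normal(0, 0.1, 20) for _ in range(8)]

# ... and recover it by maximizing the summed evidence across tasks
res = minimize_scalar(lambda ell: -sum(logml(y, ell) for y in tasks),
                      bounds=(0.2, 5.0), method="bounded")
print(res.x)   # recovered shared length scale, near 1.5 up to sampling noise
```

Each new function encountered then benefits from the hyperparameters learned across previous functions, which is the behavioral signature the experiments test for.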
Affiliation(s)
- Michael Y Li: Department of Computer Science, Stanford University
- Ryan P Adams: Department of Computer Science, Princeton University
- Thomas L Griffiths: Department of Psychology, Princeton University; Department of Computer Science, Princeton University

33.
Fouefack JR, Borotikar B, Lüthi M, Douglas TS, Burdin V, Mutsvangwa TEM. Dynamic multi feature-class Gaussian process models. Med Image Anal 2023; 85:102730. [PMID: 36586395] [DOI: 10.1016/j.media.2022.102730]
Abstract
In model-based medical image analysis, three relevant features are the shape of structures of interest, their relative pose, and image intensity profiles representative of some physical properties. Often, these features are modelled separately through statistical models by decomposing the object's features into a set of basis functions through principal geodesic analysis or principal component analysis. However, analysing articulated objects in an image using independent single-object models may lead to large uncertainties and impingement, especially around organ boundaries. Questions that come to mind are whether it is feasible to build a unique model that combines all three features of interest in the same statistical space, and what advantages this would bring for image analysis. This study presents a statistical modelling method for automatic analysis of shape, pose and intensity features in medical images, which we call Dynamic multi feature-class Gaussian process models (DMFC-GPM). The DMFC-GPM is a Gaussian process (GP)-based model with a shared latent space that encodes linear and non-linear variations. Our method is defined in a continuous domain, with a principled way to represent shape, pose and intensity feature-classes in a linear space based on deformation fields. A deformation-field-based metric is adapted in the method for modelling shape and intensity variation as well as for comparing rigid transformations (pose). Moreover, DMFC-GPMs inherit properties intrinsic to GPs, including marginalisation and regression. Furthermore, they allow for adding pose variability on top of that obtained from the image acquisition process, which we term permutation modelling. For image analysis tasks using DMFC-GPMs, we adapt Metropolis-Hastings algorithms, making the prediction of features fully probabilistic. We validate the method using controlled synthetic data and perform experiments on bone structures from CT images of the shoulder to illustrate the efficacy of the model for pose and shape prediction. The results suggest that this new modelling paradigm is robust, accurate and accessible, with potential applications in a multitude of scenarios, including the management of musculoskeletal disorders, clinical decision making and image processing.
34.
Rabbani A, Gao H, Lazarus A, Dalton D, Ge Y, Mangion K, Berry C, Husmeier D. Image-based estimation of the left ventricular cavity volume using deep learning and Gaussian process with cardio-mechanical applications. Comput Med Imaging Graph 2023; 106:102203. [PMID: 36848766] [DOI: 10.1016/j.compmedimag.2023.102203]
Abstract
In this investigation, an image-based method has been developed to estimate the volume of the left ventricular cavity from cardiac magnetic resonance (CMR) imaging data. Deep learning and Gaussian processes have been applied to bring the estimates closer to the manually extracted cavity volumes. CMR data from 339 patients and healthy volunteers have been used to train a stepwise regression model that estimates the volume of the left ventricular cavity at the beginning and end of diastole. We have decreased the root mean square error (RMSE) of cavity volume estimation from approximately 13 to 8 ml compared with common practice in the literature. Considering that the RMSE of manual measurements is approximately 4 ml on the same dataset, an error of 8 ml is notable for a fully automated estimation method that needs no supervision or user-hours once it has been trained. Additionally, to demonstrate a clinically relevant application of the automatically estimated volumes, we inferred the passive material properties of the myocardium from the volume estimates using a well-validated cardiac model. These material properties can further support patient treatment planning and diagnosis.
Affiliation(s)
- Arash Rabbani: School of Mathematics & Statistics, University of Glasgow, Glasgow G12 8QQ, United Kingdom; School of Computing, University of Leeds, Leeds LS2 9JT, United Kingdom
- Hao Gao: School of Mathematics & Statistics, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Alan Lazarus: School of Mathematics & Statistics, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- David Dalton: School of Mathematics & Statistics, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Yuzhang Ge: School of Mathematics & Statistics, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Kenneth Mangion: School of Cardiovascular & Metabolic Health, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Colin Berry: School of Cardiovascular & Metabolic Health, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Dirk Husmeier: School of Mathematics & Statistics, University of Glasgow, Glasgow G12 8QQ, United Kingdom

35.
Hasan MM, Rogiers B, Laloy E, Camps J, Rutten J, Huysmans M. Estimation of soil radioactivity-depth profiles using Bayesian inversion of borehole gamma spectrometry data. J Environ Radioact 2023; 257:107077. [PMID: 36436252] [DOI: 10.1016/j.jenvrad.2022.107077]
Abstract
Inversion of in situ borehole gamma spectrometry data is a faster and less laborious method for calculating the vertical distribution of radioactivity in soil than the conventional soil sampling method. However, calculating the efficiency of a detector for such measurements is challenging because of the spatial and temporal variation of soil properties and other measurement parameters. In this study, the sensitivity of simulated efficiencies for the 662 keV photon peak to different soil characteristics and measurement parameters was investigated. In addition, Bayesian data inversion with a Gaussian process model was used to calculate the activity concentration of 137Cs and its uncertainty, accounting for the sources of uncertainty identified during the sensitivity analysis, including soil density, borehole radius and the uncertainty in the detector position in the borehole. Several soil samples were also collected from the borehole and the surrounding area, and their 137Cs activity concentrations were measured for comparison with the inversion results. The calculated 137Cs activity concentrations agree well with those obtained from soil samples. It can therefore be concluded that the vertical radioactivity distribution can be calculated with this probabilistic method from in situ gamma spectrometric measurements.
Affiliation(s)
- Md Moudud Hasan: SCK CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400, Mol, Belgium; Department of Hydrology and Hydraulic Engineering, Vrije Universiteit Brussel (VUB), Pleinlaan 2, BE-1050, Brussels, Belgium
- Bart Rogiers: SCK CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400, Mol, Belgium
- Eric Laloy: SCK CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400, Mol, Belgium
- Johan Camps: SCK CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400, Mol, Belgium
- Jos Rutten: SCK CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400, Mol, Belgium
- Marijke Huysmans: Department of Hydrology and Hydraulic Engineering, Vrije Universiteit Brussel (VUB), Pleinlaan 2, BE-1050, Brussels, Belgium

36.
Zhang X, Zhang G, Zhang D, Zhang L, Qian F. Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace. Materials (Basel) 2023; 16:1164. [PMID: 36770171] [PMCID: PMC9920012] [DOI: 10.3390/ma16031164]
Abstract
With its special porous structure and long-lasting carbon sequestration characteristics, biochar has shown potential for improving soil fertility, reducing carbon emissions and increasing soil carbon sequestration. However, biochar technology has not been applied on a large scale, owing to the complex structure, long transportation distances for raw materials, and high cost. To overcome these issues, the brazier-type gasification and carbonization furnace is designed to carry out dry distillation and anaerobic carbonization and to achieve a high carbonization rate under high-temperature conditions. To improve operation and maintenance efficiency, we formulate the operation of the brazier-type gasification and carbonization furnace as a dynamic multi-objective optimization problem (DMOP). Firstly, we analyze the dynamic factors in the working process of the furnace, such as the equipment capacity, the operating conditions, and the biomass treated by the furnace. Afterward, we select the biochar yield and carbon monoxide emission as the dynamic objectives and model the DMOP. Finally, we apply three dynamic multi-objective evolutionary algorithms to solve the optimization problem and verify the effectiveness of the dynamic optimization approach for the gasification and carbonization furnace.
Affiliation(s)
- Xi Zhang: Key Laboratory of Smart Manufacturing in Energy Chemical Process, East China University of Science and Technology, Shanghai 200237, China
- Guiyun Zhang: Institute of Cotton Research, Shanxi Agricultural University, Yuncheng 044000, China
- Dong Zhang: Discipline of Engineering and Energy, College of Science, Health, Engineering and Education, Murdoch University, Perth, WA 6150, Australia
- Liping Zhang: Institute of Cotton Research, Shanxi Agricultural University, Yuncheng 044000, China
- Feng Qian: Key Laboratory of Smart Manufacturing in Energy Chemical Process, East China University of Science and Technology, Shanghai 200237, China

37.
Sun S, Wang C, Zhao P, Kline GM, Grandjean JMD, Jiang X, Labaudiniere R, Wiseman RL, Kelly JW, Balch WE. Capturing the conversion of the pathogenic alpha-1-antitrypsin fold by ATF6 enhanced proteostasis. Cell Chem Biol 2023; 30:22-42.e5. [PMID: 36630963 PMCID: PMC9930901 DOI: 10.1016/j.chembiol.2022.12.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 11/07/2022] [Accepted: 12/19/2022] [Indexed: 01/12/2023]
Abstract
Genetic variation in alpha-1 antitrypsin (AAT) causes AAT deficiency (AATD) through liver aggregation-associated gain-of-toxic pathology and/or insufficient AAT activity in the lung manifesting as chronic obstructive pulmonary disease (COPD). Here, we utilize 71 AATD-associated variants as input through Gaussian process (GP)-based machine learning to study the correction of AAT folding and function at a residue-by-residue level by pharmacological activation of the ATF6 arm of the unfolded protein response (UPR). We show that ATF6 activators increase AAT neutrophil elastase (NE) inhibitory activity, while reducing polymer accumulation for the majority of AATD variants, including the prominent Z variant. GP-based profiling of the residue-by-residue response to ATF6 activators captures an unexpected role of the "gate" area in managing AAT-specific activity. Our work establishes a new spatial covariant (SCV) understanding of the convertible state of the protein fold in response to genetic perturbation and active environmental management by proteostasis enhancement for precision medicine.
Affiliation(s)
- Shuhong Sun: Department of Molecular Medicine, The Scripps Research Institute, La Jolla, CA, USA
- Chao Wang: Department of Molecular Medicine, The Scripps Research Institute, La Jolla, CA, USA
- Pei Zhao: Department of Molecular Medicine, The Scripps Research Institute, La Jolla, CA, USA
- Gabe M Kline: Department of Chemistry, The Scripps Research Institute, La Jolla, CA, USA
- Xin Jiang: Protego Biopharma, 10945 Vista Sorrento Parkway, San Diego, CA, USA
- R Luke Wiseman: Department of Molecular Medicine, The Scripps Research Institute, La Jolla, CA, USA
- Jeffery W Kelly: Department of Chemistry, The Scripps Research Institute, La Jolla, CA, USA
- William E Balch: Department of Molecular Medicine, The Scripps Research Institute, La Jolla, CA, USA

38
Feng SV, van den Boom W, De Iorio M, Thng GJ, Chan JKY, Chen HY, Tan KH, Kee MZL. Joint modelling of mental health markers through pregnancy: a Bayesian semi-parametric approach. J Appl Stat 2023; 51:388-405. [PMID: 38283054 PMCID: PMC10810649 DOI: 10.1080/02664763.2022.2154329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 09/23/2022] [Indexed: 01/14/2023]
Abstract
Maternal depression and anxiety during pregnancy have lasting societal impacts. It is thus crucial to understand the trajectories of their progression from preconception to the postnatal period, and the associated risk factors. Within the Bayesian framework, we jointly model seven outcomes, of which two are physiological and five are non-physiological indicators of maternal depression and anxiety over time. We model the former two with a Gaussian process and the latter five with an autoregressive model, while imposing a multidimensional Dirichlet process prior on the subject-specific random effects to account for subject heterogeneity and induce clustering. The model allows for the inclusion of covariates through a regression term. Our findings reveal four distinct clusters of trajectories of the seven health outcomes, characterising women's mental health progression from before to after pregnancy. Importantly, our results caution against the loose use of hair corticosteroids as a biomarker, or even a causal factor, for mental health progression in pregnancy. Additionally, the regression analysis reveals a range of preconception determinants and risk factors for depressive and anxiety symptoms during pregnancy.
Affiliation(s)
- Willem van den Boom: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Agency for Science, Technology and Research, Singapore Institute for Clinical Sciences, Singapore
- Maria De Iorio: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Agency for Science, Technology and Research, Singapore Institute for Clinical Sciences, Singapore; Department of Statistical Science, University College London, London, UK
- Gladi J. Thng: Agency for Science, Technology and Research, Singapore Institute for Clinical Sciences, Singapore
- Jerry K. Y. Chan: Department of Reproductive Medicine, KK Women's and Children's Hospital, Singapore; Duke-NUS Medical School, Singapore
- Helen Y. Chen: Duke-NUS Medical School, Singapore; Department of Psychological Medicine, KK Women's and Children's Hospital, Singapore
- Kok Hian Tan: Duke-NUS Medical School, Singapore; Department of Maternal Fetal Medicine, KK Women's and Children's Hospital, Singapore
- Michelle Z. L. Kee: Agency for Science, Technology and Research, Singapore Institute for Clinical Sciences, Singapore

39
Iqbal WA, Lisitsa A, Kapralov MV. Predicting plant Rubisco kinetics from RbcL sequence data using machine learning. J Exp Bot 2023; 74:638-650. [PMID: 36094849 PMCID: PMC9833099 DOI: 10.1093/jxb/erac368] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 09/12/2022] [Indexed: 06/15/2023]
Abstract
Ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) is responsible for the conversion of atmospheric CO2 to organic carbon during photosynthesis, and often acts as a rate-limiting step in the latter process. Screening the natural diversity of Rubisco kinetics is the main strategy used to find better Rubisco enzymes for crop engineering efforts. Here, we demonstrate the use of Gaussian processes (GPs), a family of Bayesian models, coupled with protein encoding schemes, for predicting Rubisco kinetics from Rubisco large subunit (RbcL) sequence data. GPs trained on published experimentally obtained Rubisco kinetic datasets were applied to over 9000 sequences encoding RbcL to predict Rubisco kinetic parameters. Notably, the predicted kinetic values were in agreement with known trends, e.g. higher carboxylation turnover rates (kcat) for Rubisco enzymes from C4 or crassulacean acid metabolism (CAM) species compared with those from C3 species. This is the first study demonstrating machine learning approaches as a tool for screening and predicting Rubisco kinetics, an approach that could be applied to other enzymes.
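The core recipe, encode sequences numerically and regress kinetic constants with a GP, can be sketched in a few lines of NumPy. Everything below (the toy 4-letter sequences, the one-hot encoding, the kcat values) is hypothetical and stands in for the paper's RbcL data and encoding schemes:

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    # squared-exponential kernel between rows of X1 and X2
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-4):
    # standard exact-GP regression equations: predictive mean and pointwise variance
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    cov = rbf(Xte, Xte) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

def encode(seq, alphabet="ACGT"):
    # toy one-hot encoding of a short nucleotide string, flattened to a vector
    X = np.zeros((len(seq), len(alphabet)))
    for i, ch in enumerate(seq):
        X[i, alphabet.index(ch)] = 1.0
    return X.ravel()

train_seqs = ["ACGT", "ACGA", "TTGT", "TCGA"]   # hypothetical toy sequences
ytrain = np.array([2.5, 2.4, 4.0, 3.1])         # hypothetical kcat values
Xtr = np.vstack([encode(s) for s in train_seqs])
mean, var = gp_predict(Xtr, ytrain, Xtr)
```

With a trained GP, predicting kinetics for thousands of new sequences is just another call to `gp_predict` on their encodings, with `var` flagging sequences far from the training set.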
Affiliation(s)
- Wasim A Iqbal: School of Natural and Environmental Sciences, Newcastle University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Alexei Lisitsa: Department of Computer Science, University of Liverpool, Liverpool, L69 3BX, United Kingdom

40
Kevrekidis GA, Rapti Z, Drossinos Y, Kevrekidis PG, Barmann MA, Chen QY, Cuevas-Maraver J. Backcasting COVID-19: a physics-informed estimate for early case incidence. R Soc Open Sci 2022; 9:220329. [PMID: 36533196 PMCID: PMC9748501 DOI: 10.1098/rsos.220329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 11/17/2022] [Indexed: 06/17/2023]
Abstract
It is widely accepted that the number of reported cases during the first stages of the COVID-19 pandemic severely underestimates the number of actual cases. We leverage the delay embedding theorems of Whitney and Takens and use Gaussian process regression to estimate the number of cases during the first wave of 2020 based on the second wave of the epidemic in several European countries, South Korea and Brazil. We assume that the second wave was more accurately monitored, even though we acknowledge that behavioural changes occurred during the pandemic and that region- (or country-) specific monitoring protocols evolved. We then construct a manifold diffeomorphic to that of the implied original dynamical system, using fatalities or hospitalizations only. Finally, we restrict the diffeomorphism to the reported-cases coordinate of the dynamical system. Our main finding is that in the European countries studied, actual cases were under-reported by as much as 50%. On the other hand, in South Korea, which had a proactive mitigation approach, a far smaller discrepancy between actual and reported cases is predicted, with an estimated underestimation of approximately 18%. We believe that our backcasting framework is applicable to other epidemic outbreaks where, due to limited or poor-quality data, there is uncertainty around the actual cases.
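The first step of such a backcast, reconstructing state-space coordinates from a single observed series via delay embedding, can be sketched directly. The fatality counts below are invented for illustration; the subsequent GP regression from embedded coordinates to reported cases is only indicated in a comment:

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    # Takens-style delay-coordinate embedding of a scalar time series:
    # row j is (x[j], x[j+tau], ..., x[j+(dim-1)*tau])
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

# hypothetical daily fatality counts (illustrative numbers only)
fatalities = np.array([1., 2., 4., 7., 11., 14., 15., 14., 11., 8.])
E = delay_embed(fatalities, dim=3, tau=1)
# each row of E is a point on the reconstructed manifold; a GP regression
# from these coordinates to reported cases (fit on the well-monitored second
# wave) would complete the backcast of the first wave
```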
Affiliation(s)
- G. A. Kevrekidis: Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01003, USA
- Z. Rapti: Department of Mathematics and Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign, Urbana, IL 61820, USA
- Y. Drossinos: European Commission, Joint Research Centre, I-21027 Ispra (VA), Italy
- P. G. Kevrekidis: Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01003, USA
- M. A. Barmann: Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01003, USA
- Q. Y. Chen: Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01003, USA
- J. Cuevas-Maraver: Grupo de Física No Lineal, Departamento de Física Aplicada I, Universidad de Sevilla, Escuela Politécnica Superior, C/ Virgen de África, 7, 41012 Sevilla, Spain; Instituto de Matemáticas de la Universidad de Sevilla (IMUS), Edificio Celestino Mutis, Avda. Reina Mercedes s/n, 41012 Sevilla, Spain

41
Miran W, Huang W, Long X, Imamura G, Okamoto A. Multivariate landscapes constructed by Bayesian estimation over five hundred microbial electrochemical time profiles. Patterns (N Y) 2022; 3:100610. [PMID: 36419444 PMCID: PMC9676538 DOI: 10.1016/j.patter.2022.100610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/04/2022] [Revised: 08/24/2022] [Accepted: 09/21/2022] [Indexed: 11/07/2022]
Abstract
Data science is emerging as a promising approach for studying and optimizing complex multivariable phenomena, such as the interaction between microorganisms and electrodes. However, to date there have been limited reports of bioelectrochemical systems that can produce a reliable database. Herein, we developed a high-throughput platform with low deviation to apply two-dimensional (2D) Bayesian estimation over electrode potential and redox-active additive concentration to optimize microbial current production (Ic). A 96-channel potentiostat achieved <10% standard deviation in maximum Ic. 576 time-Ic profiles were obtained under 120 different electrolyte and potentiostatic conditions with two model electrogenic bacteria, Shewanella and Geobacter. Acquisition functions showed the highest performance per concentration for riboflavin over a wide potential range in Shewanella. The underlying mechanism was validated by electrochemical analysis with mutant strains lacking outer-membrane redox enzymes. We anticipate that the combination of data science and high-throughput electrochemistry will greatly accelerate breakthroughs for bioelectrochemical technologies.
Affiliation(s)
- Waheed Miran: International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan; School of Chemical and Materials Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Wenyuan Huang: International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan; Graduate School of Chemical Sciences and Engineering, Hokkaido University, North 13 West 8, Kita-ku, Sapporo, Hokkaido 060-8628, Japan
- Xizi Long: International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan
- Gaku Imamura: International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan; Graduate School of Information Science and Technology, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Akihiro Okamoto: International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan; Graduate School of Chemical Sciences and Engineering, Hokkaido University, North 13 West 8, Kita-ku, Sapporo, Hokkaido 060-8628, Japan

42
Ameli S, Shadden SC. Interpolating log-determinant and trace of the powers of matrix A + t B. Stat Comput 2022; 32:108. [PMID: 36397998 PMCID: PMC9649515 DOI: 10.1007/s11222-022-10173-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 10/21/2022] [Indexed: 06/16/2023]
Abstract
We develop heuristic interpolation methods for the functions t ↦ log det(A + tB) and t ↦ trace((A + tB)^p), where the matrices A and B are Hermitian and positive (semi-)definite and p and t are real variables. These functions are featured in many applications in statistics, machine learning, and computational physics. The presented interpolation functions are based on the modification of sharp bounds for these functions. We demonstrate the accuracy and performance of the proposed method with numerical examples, namely, marginal maximum likelihood estimation for Gaussian process regression and estimation of the regularization parameter of ridge regression with the generalized cross-validation method.
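A minimal NumPy sketch of the setting, with random SPD test matrices: exact evaluation of log det(A + tB) via Cholesky at a few anchor values of t, then a crude piecewise-linear interpolant in t. The paper's bound-based interpolants are sharper; this only illustrates the problem and a naive baseline (by concavity of t ↦ log det(A + tB), the linear interpolant is a lower bound):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)   # Hermitian positive definite
N = rng.standard_normal((5, 5))
B = N @ N.T + np.eye(5)         # Hermitian positive definite

def logdet(t):
    # log det(A + tB) via Cholesky: 2 * sum(log diag(L))
    L = np.linalg.cholesky(A + t * B)
    return 2.0 * np.log(np.diag(L)).sum()

# exact values at a few anchor points, then interpolate in t
ts = np.array([0.0, 0.5, 1.0])
vals = np.array([logdet(t) for t in ts])
approx = np.interp(0.25, ts, vals)   # cheap estimate at an off-anchor t
exact = logdet(0.25)                 # expensive reference value
```

The payoff is that each off-anchor evaluation becomes an O(1) interpolation instead of an O(n^3) factorization, which matters inside a hyperparameter search loop.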
Affiliation(s)
- Siavash Ameli: Mechanical Engineering, University of California, Berkeley, CA 94720, USA
- Shawn C. Shadden: Mechanical Engineering, University of California, Berkeley, CA 94720, USA

43
Ziatdinov M, Liu Y, Kelley K, Vasudevan R, Kalinin SV. Bayesian Active Learning for Scanning Probe Microscopy: From Gaussian Processes to Hypothesis Learning. ACS Nano 2022; 16:13492-13512. [PMID: 36066996 DOI: 10.1021/acsnano.2c05303] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Recent progress in machine learning methods and the emerging availability of programmable interfaces for scanning probe microscopes (SPMs) have propelled automated and autonomous microscopies to the forefront of attention of the scientific community. However, enabling automated microscopy requires the development of task-specific machine learning methods, understanding the interplay between physics discovery and machine learning, and fully defined discovery workflows. This, in turn, requires balancing the physical intuition and prior knowledge of the domain scientist with rewards that define experimental goals and machine learning algorithms that can translate these to specific experimental protocols. Here, we discuss the basic principles of Bayesian active learning and illustrate its applications for SPM. We progress from the Gaussian process as a simple data-driven method and Bayesian inference for physical models as an extension of physics-based functional fits to more complex deep kernel learning methods, structured Gaussian processes, and hypothesis learning. These frameworks allow for the use of prior data, the discovery of specific functionalities as encoded in spectral data, and exploration of physical laws manifesting during the experiment. The discussed framework can be universally applied to all techniques combining imaging and spectroscopy, SPM methods, nanoindentation, electron microscopy and spectroscopy, and chemical imaging methods and can be particularly impactful for destructive or irreversible measurements.
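The simplest acquisition rule discussed, probing where the GP posterior is most uncertain, fits in a short sketch. The grid, kernel length-scale, and measured locations below are all hypothetical:

```python
import numpy as np

def rbf(a, b, ls=0.15):
    # squared-exponential kernel for scalar inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def posterior_var(xtr, xgrid, noise=1e-6):
    # GP posterior variance (unit prior variance); depends only on locations,
    # not on the measured values themselves
    K = rbf(xtr, xtr) + noise * np.eye(len(xtr))
    Ks = rbf(xgrid, xtr)
    V = np.linalg.solve(K, Ks.T)
    return 1.0 - np.einsum("ij,ji->i", Ks, V)

grid = np.linspace(0.0, 1.0, 101)      # candidate probe locations
measured = np.array([0.1, 0.5, 0.9])   # locations already measured
var = posterior_var(measured, grid)
next_point = grid[np.argmax(var)]      # most uncertain location is probed next
```

In an autonomous microscope loop, `next_point` would be measured, appended to `measured`, and the acquisition re-evaluated; richer rules (expected improvement, hypothesis learning) replace the plain variance criterion.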
Affiliation(s)
- Sergei V Kalinin: Department of Materials Sciences and Engineering, University of Tennessee, Knoxville, Tennessee 37996, United States

44
Man J, Zielinski MD, Das D, Sir MY, Wutthisirisart P, Camazine M, Pasupathy KS. Non-invasive Hemoglobin Measurement Predictive Analytics with Missing Data and Accuracy Improvement Using Gaussian Process and Functional Regression Model. J Med Syst 2022; 46:72. [PMID: 36156743 DOI: 10.1007/s10916-022-01854-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 08/17/2022] [Indexed: 11/29/2022]
Abstract
Recent noninvasive, continuous hemoglobin (SpHb) concentration monitors have emerged as an alternative to invasive, laboratory-based hematological analysis. Unlike delayed laboratory-based measures of hemoglobin (HgB), SpHb monitors can provide real-time information about HgB levels. Real-time SpHb measurements offer healthcare providers warnings and early detection of abnormal health status, e.g., hemorrhagic shock and anemia, and thus support therapeutic decision-making and help save lives. However, the finger-worn CO-Oximeter sensors used in SpHb monitors often get detached or have to be removed, which causes missing data in the continuous SpHb measurements. Missing data among SpHb measurements reduce trust in the accuracy of the device and influence the effectiveness of hemorrhage interventions and future HgB predictions. A model with imputation and prediction methods is investigated to deal with missing values and improve prediction accuracy: Gaussian process and functional regression methods are proposed to impute missing SpHb data and to predict laboratory-based HgB measurements. Within the proposed framework, multiple choices of sub-models are considered. In a real-data study, the proposed method shows a significant improvement in accuracy; the different choices of sub-models are discussed and usage recommendations are provided accordingly. The modeling framework can be extended to other application scenarios with missing values.
Affiliation(s)
- Jianing Man: School of Mechanical Engineering, Institute of Industrial and Intelligent System Engineering, Beijing Institute of Technology, Beijing, China; Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Devashish Das: Department of Industrial and Management Systems Engineering, University of South Florida, Tampa, FL, USA
- Phichet Wutthisirisart: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Kalyan S Pasupathy: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA; Department of Biomedical and Health Information Sciences, University of Illinois at Chicago, Chicago, IL, USA

45
Kim S, Kim J. Efficient Clustering for Continuous Occupancy Mapping Using a Mixture of Gaussian Processes. Sensors (Basel) 2022; 22:6832. [PMID: 36146179 PMCID: PMC9505052 DOI: 10.3390/s22186832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 09/04/2022] [Accepted: 09/07/2022] [Indexed: 06/16/2023]
Abstract
This paper proposes a novel method for occupancy map building using a mixture of Gaussian processes. Gaussian processes have proven to be highly flexible and accurate for a robotic occupancy mapping problem, yet the high computational complexity has been a critical barrier for large-scale applications. We consider clustering the data into small, manageable subsets and applying a mixture of Gaussian processes. One of the problems in clustering is that the number of groups is not known a priori, thus requiring inputs from experts. We propose two efficient clustering methods utilizing (1) a Dirichlet process and (2) geometrical information in the context of occupancy mapping. We will show that the Dirichlet process-based clustering can significantly speed up the training step of the Gaussian process and if geometrical features, such as line features, are available, they can further improve the clustering accuracy. We will provide simulation results, analyze the performance and demonstrate the benefits of the proposed methods.
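A toy illustration of the divide-and-conquer idea, assuming a contrived 1-D dataset and a trivial gap-based split in place of the Dirichlet-process or geometric clustering: each cluster gets its own small GP, so two 50-point solves replace one 100-point solve (exact GP training scales cubically in the number of points):

```python
import numpy as np

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_mean(xtr, ytr, xte, noise=1e-4):
    # exact-GP predictive mean for scalar inputs
    K = rbf(xtr, xtr) + noise * np.eye(len(xtr))
    return rbf(xte, xtr) @ np.linalg.solve(K, ytr)

# hypothetical 1-D occupancy data in two well-separated regions
x = np.concatenate([np.linspace(0.0, 1.0, 50), np.linspace(5.0, 6.0, 50)])
y = np.concatenate([np.ones(50), np.zeros(50)])   # occupied vs. free

# crude gap-based split (stand-in for the paper's clustering methods),
# then one small, independent GP per cluster
labels = (x > 3.0).astype(int)
preds = np.empty_like(y)
for c in (0, 1):
    m = labels == c
    preds[m] = gp_mean(x[m], y[m], x[m])
```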
Affiliation(s)
- Soohwan Kim: Department of Artificial Intelligence Software Technology, Sunmoon University, 70, Sunmoon-ro 221 beon-gil, Tangjeong-myeon, Asan-si 31460, Chungcheongnam-do, Korea
- Jonghyuk Kim: The Center of Excellence for Cybercrimes and Digital Forensics, Naif Arab University for Security Sciences, Riyadh 11452, Saudi Arabia

46
Dykstra G, Reynolds B, Smith R, Zhou K, Liu Y. Electropolymerized Molecularly Imprinted Polymer Synthesis Guided by an Integrated Data-Driven Framework for Cortisol Detection. ACS Appl Mater Interfaces 2022; 14:25972-25983. [PMID: 35536156 DOI: 10.1021/acsami.2c02474] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Molecularly imprinted polymers (MIPs), often called "synthetic antibodies", are highly attractive as artificial receptors with tailored biomolecular recognition for constructing biosensors. Electropolymerization is a fast and facile method to synthesize MIP sensing elements in situ directly on the working electrode, enabling ultra-low-cost, easy-to-manufacture electrochemical biosensors. However, due to the high-dimensional design space of electropolymerized MIPs (e-MIPs), their development by trial and error is challenging and lengthy without proper guidelines. By leveraging machine learning techniques to build a quantitative relationship between synthesis parameters and the corresponding sensing performance, e-MIP development and optimization can be facilitated. We herein demonstrate a case study on the synthesis of cortisol-imprinted polypyrrole for cortisol detection, where e-MIPs are fabricated with 72 sets of synthesis parameters, with replicates. Their sensing performances are measured using a 12-channel potentiostat to construct the subsequent data-driven framework. A Gaussian process (GP) is employed as the mainstay of the integrated framework, as it can account for various uncertainties in the synthesis and measurements. Sobol-index-based global sensitivity analysis is then performed on the GP surrogate model to elucidate the impact of the e-MIPs' synthesis parameters on sensing performance and the interrelations among parameters. Based on the predictions of the established GP model and local sensitivity analysis, the synthesis parameters are optimized and validated by experiment, leading to a remarkable enhancement in sensing performance (a 1.5-fold increase in sensitivity). The proposed framework is novel in biosensor development, and it is expandable and generally applicable to the development of other sensing materials.
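The Sobol first-order index measures the share of output variance explained by each input alone. A self-contained Monte Carlo sketch on a stand-in surrogate (a hypothetical linear function, not the paper's fitted GP) illustrates the estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # stand-in surrogate with one dominant input; a real study would call
    # the fitted GP surrogate's predictive mean here
    return 4.0 * x[:, 0] + x[:, 1]

n = 50_000
A = rng.uniform(0.0, 1.0, size=(n, 2))   # two independent base samples
B = rng.uniform(0.0, 1.0, size=(n, 2))
fA, fB = f(A), f(B)
var = fA.var()

S = []
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                          # freeze input i at A's values
    S.append(np.mean(fA * (f(ABi) - fB)) / var)  # first-order Sobol estimator
S = np.array(S)
```

For this additive toy function the exact indices are 16/17 and 1/17, so the estimator should rank input 0 as dominant, mirroring how the paper ranks synthesis parameters by their contribution to sensing-performance variance.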
Affiliation(s)
- Grace Dykstra: Department of Chemical Engineering, Michigan Technological University, 1400 Townsend Drive, Houghton, Michigan 49931, United States
- Benjamin Reynolds: Department of Chemical Engineering, Michigan Technological University, 1400 Townsend Drive, Houghton, Michigan 49931, United States
- Riley Smith: Department of Chemical Engineering, Michigan Technological University, 1400 Townsend Drive, Houghton, Michigan 49931, United States
- Kai Zhou: Department of Mechanical Engineering-Engineering Mechanics, Michigan Technological University, 1400 Townsend Drive, Houghton, Michigan 49931, United States
- Yixin Liu: Department of Chemical Engineering, Michigan Technological University, 1400 Townsend Drive, Houghton, Michigan 49931, United States

47
Lee SH, Kim D, Opfer JE, Pitt MA, Myung JI. A number-line task with a Bayesian active learning algorithm provides insights into the development of non-symbolic number estimation. Psychon Bull Rev 2022; 29:971-984. [PMID: 34918270 DOI: 10.3758/s13423-021-02041-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/11/2021] [Indexed: 01/29/2023]
Abstract
To characterize numerical representations, the number-line task asks participants to estimate the location of a given number on a line flanked with zero and an upper-bound number. An open question is whether estimates for symbolic numbers (e.g., Arabic numerals) and non-symbolic numbers (e.g., number of dots) rely on common processes with a common developmental pathway. To address this question, we explored whether well-established findings in symbolic number-line estimation generalize to non-symbolic number-line estimation. For exhaustive investigations without sacrificing data quality, we applied a novel Bayesian active learning algorithm, dubbed Gaussian process active learning (GPAL), that adaptively optimizes experimental designs. The results showed that the non-symbolic number estimation in participants of diverse ages (5-73 years old, n = 238) exhibited three characteristic features of symbolic number estimation.
Affiliation(s)
- Sang Ho Lee: Department of Psychology, The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
- Dan Kim: Department of Psychology, The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
- John E Opfer: Department of Psychology, The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
- Mark A Pitt: Department of Psychology, The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
- Jay I Myung: Department of Psychology, The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA

48
Lazarus A, Dalton D, Husmeier D, Gao H. Sensitivity analysis and inverse uncertainty quantification for the left ventricular passive mechanics. Biomech Model Mechanobiol 2022; 21:953-982. [PMID: 35377030 PMCID: PMC9132878 DOI: 10.1007/s10237-022-01571-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 02/28/2022] [Indexed: 01/08/2023]
Abstract
Personalized computational cardiac models are considered to be a unique and powerful tool in modern cardiology, integrating the knowledge of physiology, pathology and fundamental laws of mechanics in one framework. They have the potential to improve risk prediction in cardiac patients and assist in the development of new treatments. However, in order to use these models for clinical decision support, it is important that both the impact of model parameter perturbations on the predicted quantities of interest as well as the uncertainty of parameter estimation are properly quantified, where the first task is a priori in nature (meaning independent of any specific clinical data), while the second task is carried out a posteriori (meaning after specific clinical data have been obtained). The present study addresses these challenges for a widely used constitutive law of passive myocardium (the Holzapfel-Ogden model), using global sensitivity analysis (SA) to address the first challenge, and inverse-uncertainty quantification (I-UQ) for the second challenge. The SA is carried out on a range of different input parameters to a left ventricle (LV) model, making use of computationally efficient Gaussian process (GP) surrogate models in place of the numerical forward simulator. The results of the SA are then used to inform a low-order reparametrization of the constitutive law for passive myocardium under consideration. The quality of this parameterization in the context of an inverse problem having observed noisy experimental data is then quantified with an I-UQ study, which again makes use of GP surrogate models. The I-UQ is carried out in a Bayesian manner using Markov Chain Monte Carlo, which allows for full uncertainty quantification of the material parameter estimates. 
Our study reveals insights into the relation between SA and I-UQ, elucidates the dependence of parameter sensitivity and estimation uncertainty on external factors, like LV cavity pressure, and sheds new light on cardio-mechanic model formulation, with particular focus on the Holzapfel-Ogden myocardial model.
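The a-posteriori step, MCMC over material parameters with a cheap surrogate in the likelihood, can be sketched with a random-walk Metropolis-Hastings sampler. The quadratic "surrogate", noise level, and prior bounds below are all hypothetical stand-ins for the GP emulator and Holzapfel-Ogden parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate(theta):
    # hypothetical stand-in for a GP emulator of the forward simulator
    return theta**2 + 0.5 * theta

theta_true, sigma = 1.2, 0.05
data = surrogate(theta_true) + sigma * rng.standard_normal(20)  # synthetic observations

def log_post(theta):
    if not (0.0 < theta < 5.0):      # uniform prior on (0, 5)
        return -np.inf
    r = data - surrogate(theta)      # Gaussian likelihood residuals
    return -0.5 * np.sum(r**2) / sigma**2

# random-walk Metropolis-Hastings
theta, chain = 2.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[1000:])        # discard burn-in
```

The retained samples approximate the full posterior, so `post.mean()` and `post.std()` give the parameter estimate and its uncertainty, which is exactly the quantification the I-UQ study needs.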
Affiliation(s)
- Alan Lazarus: School of Mathematics and Statistics, University of Glasgow, Glasgow, UK
- David Dalton: School of Mathematics and Statistics, University of Glasgow, Glasgow, UK
- Dirk Husmeier: School of Mathematics and Statistics, University of Glasgow, Glasgow, UK
- Hao Gao: School of Mathematics and Statistics, University of Glasgow, Glasgow, UK

49
Zhang K, Karanth S, Patel B, Murphy R, Jiang X. A multi-task Gaussian process self-attention neural network for real-time prediction of the need for mechanical ventilators in COVID-19 patients. J Biomed Inform 2022; 130:104079. [PMID: 35489596 PMCID: PMC9044651 DOI: 10.1016/j.jbi.2022.104079] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Revised: 04/06/2022] [Accepted: 04/18/2022] [Indexed: 02/04/2023]
Abstract
OBJECTIVE The Coronavirus Disease 2019 (COVID-19) pandemic has overwhelmed the capacity of healthcare resources and posed a challenge for hospitals worldwide. The ability to distinguish potentially deteriorating patients from the rest helps facilitate reasonable allocation of medical resources, such as ventilators, hospital beds, and human resources. Real-time, accurate prediction of a patient's risk score could also help physicians provide earlier respiratory support and reduce the risk of mortality. METHODS We propose a robust real-time prediction model for the probability that an in-hospital COVID-19 patient will require mechanical ventilation (MV). The end-to-end neural network model incorporates a multi-task Gaussian process to handle the irregular sampling rate in observational data, together with a self-attention neural network for the prediction task. RESULTS We evaluate our model on a large database of 9,532 nationwide in-hospital patients with COVID-19. The model demonstrates significant robustness and consistency improvements compared to conventional machine learning models. The proposed model also improves on various deep learning models in terms of area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC), especially at early times after a patient's hospital admission. CONCLUSION The availability of large, real-time clinical data calls for new methods that make the best use of it for real-time patient risk prediction. Simplifying the data to suit traditional methods, or making unrealistic assumptions that deviate from the observations' true dynamics, is not ideal. We demonstrate a pilot effort to harmonize cross-sectional and longitudinal information for predicting the need for mechanical ventilation.
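The core preprocessing idea, using a Gaussian process to turn irregularly sampled vitals into a regular grid that a downstream attention network can consume, can be sketched as below. This is a simplified, hypothetical illustration: it fits an independent GP per channel (the paper fits a joint multi-task GP across channels), and the channel names, time grid, and kernel settings are invented for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Irregularly sampled vitals for one hypothetical patient:
# each channel is (measurement hours, values) at uneven times.
def sample_channel(n_obs):
    t = np.sort(rng.uniform(0, 48, n_obs))
    v = 95 + 2 * np.sin(t / 6) + rng.normal(0, 0.3, n_obs)
    return t, v

channels = {"resp_rate": sample_channel(12), "spo2": sample_channel(7)}

# GP-interpolate each channel onto a shared hourly grid; the posterior
# standard deviation is kept as an extra feature so a downstream model
# can discount imputed, uncertain values.
grid = np.arange(0, 48.0)
features = []
for name, (t, v) in channels.items():
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=6.0) + WhiteKernel(0.1), normalize_y=True
    ).fit(t[:, None], v)
    mean, std = gp.predict(grid[:, None], return_std=True)
    features.append(np.stack([mean, std], axis=1))

# Regular (hours x features) matrix ready for a sequence model.
X = np.concatenate(features, axis=1)  # shape (48, 4)
```

In the paper's architecture this gridded representation is learned jointly with the self-attention classifier rather than fixed in advance.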
Affiliation(s)
- Kai Zhang, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Siddharth Karanth, Department of Internal Medicine, McGovern Medical School of The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Bela Patel, Department of Internal Medicine, McGovern Medical School of The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Robert Murphy, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Xiaoqian Jiang, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
50
Wang X, Jin Y, Schmitt S, Olhofer M. Transfer Learning Based Co-Surrogate Assisted Evolutionary Bi-Objective Optimization for Objectives with Non-Uniform Evaluation Times. Evol Comput 2022; 30:221-251. [PMID: 34739055 DOI: 10.1162/evco_a_00300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/26/2020] [Accepted: 10/26/2021] [Indexed: 06/13/2023]
Abstract
Most existing multiobjective evolutionary algorithms (MOEAs) implicitly assume that each objective function can be evaluated within the same period of time. Typically, this is untenable in many real-world optimization scenarios where evaluation of different objectives involves different computer simulations or physical experiments with distinct time complexity. To address this issue, a transfer learning scheme based on surrogate-assisted evolutionary algorithms (SAEAs) is proposed, in which a co-surrogate is adopted to model the functional relationship between the fast and slow objective functions, and a transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective. Our experimental results on the DTLZ and UF test suites demonstrate that the proposed algorithm is competitive for solving bi-objective optimization problems where objectives have non-uniform evaluation times.
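The co-surrogate idea described here can be sketched in a few lines: rather than building a surrogate for the expensive objective directly, fit one to the *difference* between the slow and fast objectives, which can be learned from far fewer slow evaluations when the two are correlated. The one-dimensional `f_fast`/`f_slow` pair below is a hypothetical stand-in (the paper evaluates on DTLZ and UF benchmarks inside a full SAEA loop), and the sample budgets are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

# Hypothetical bi-objective setting: f_fast is cheap to evaluate,
# f_slow is expensive but correlated with it.
f_fast = lambda x: (x - 0.3) ** 2
f_slow = lambda x: (x - 0.3) ** 2 + 0.2 * np.sin(5 * x)

# Budget: only a handful of expensive evaluations are affordable.
x_slow = np.linspace(0, 1, 8)

# Co-surrogate: model the slow-minus-fast gap, which is smoother and
# easier to learn than f_slow itself from so few samples.
diff_gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                   normalize_y=True)
diff_gp.fit(x_slow[:, None], f_slow(x_slow) - f_fast(x_slow))

# Cheap estimate of the slow objective anywhere in the search space:
# one fast evaluation plus the predicted gap.
def f_slow_hat(x):
    x = np.asarray(x)
    return f_fast(x) + diff_gp.predict(x[:, None])

x_test = rng.uniform(0, 1, 20)
err = np.max(np.abs(f_slow_hat(x_test) - f_slow(x_test)))
```

Within the evolutionary loop, such co-surrogate predictions let the slow objective be screened at the fast objective's cadence, which is what makes the non-uniform evaluation times tractable.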
Affiliation(s)
- Xilu Wang, Department of Computer Science, University of Surrey, Guildford, GU2 7XH, United Kingdom
- Yaochu Jin, Faculty of Technology, Bielefeld University, D-33615 Bielefeld, Germany; Department of Computer Science, University of Surrey, Guildford, GU2 7XH, United Kingdom
- Sebastian Schmitt, Honda Research Institute Europe GmbH, Carl-Legien-Strasse 30, D-63073 Offenbach/Main, Germany
- Markus Olhofer, Honda Research Institute Europe GmbH, Carl-Legien-Strasse 30, D-63073 Offenbach/Main, Germany