1. Niven RK, Cordier L, Mohammad-Djafari A, Abel M, Quade M. Dynamical system identification, model selection, and model uncertainty quantification by Bayesian inference. Chaos 2024;34:083140. PMID: 39191246. DOI: 10.1063/5.0200684.
Abstract
This study presents a Bayesian maximum a posteriori (MAP) framework for dynamical system identification from time-series data. This is shown to be equivalent to a generalized Tikhonov regularization, providing a rational justification for the choice of the residual and regularization terms, respectively, from the negative logarithms of the likelihood and prior distributions. In addition to the estimation of model coefficients, the Bayesian interpretation gives access to the full apparatus for Bayesian inference, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. Two Bayesian algorithms, joint MAP and variational Bayesian approximation, are compared to the least absolute shrinkage and selection operator (LASSO), ridge regression, and the sparse identification of nonlinear dynamics (SINDy) algorithms for sparse regression by application to several dynamical systems with added Gaussian or Laplace noise. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives Gaussian posterior and evidence distributions, in which the numerator terms can be expressed in terms of the Mahalanobis distance or "Gaussian norm" ||y − ŷ||²_{M⁻¹} = (y − ŷ)⊤ M⁻¹ (y − ŷ), where y is a vector variable, ŷ is its estimator, and M is the covariance matrix. The posterior Gaussian norm is shown to provide a robust metric for quantitative model selection for the different systems and noise models examined.
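As a concrete illustration of the MAP–Tikhonov equivalence described in this abstract, the following sketch computes a MAP estimate for a linear model with Gaussian likelihood and prior, and evaluates the residual in the Gaussian norm. All dimensions, matrices, and variable names are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear model y = A w + noise (names are ours, not the paper's).
n, p = 50, 5
A = rng.normal(size=(n, p))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = A @ w_true + 0.1 * rng.normal(size=n)

# Gaussian likelihood covariance C and (broad) Gaussian prior covariance P.
C = 0.1**2 * np.eye(n)
P = 10.0**2 * np.eye(p)

def gaussian_norm_sq(r, M):
    """Squared Mahalanobis ("Gaussian") norm: r^T M^{-1} r."""
    return float(r @ np.linalg.solve(M, r))

# MAP estimate = minimizer of ||y - A w||^2_{C^-1} + ||w||^2_{P^-1},
# i.e. a generalized Tikhonov-regularized least-squares problem.
Ci = np.linalg.inv(C)
Pi = np.linalg.inv(P)
w_map = np.linalg.solve(A.T @ Ci @ A + Pi, A.T @ Ci @ y)

residual = gaussian_norm_sq(y - A @ w_map, C)
print(w_map.round(2), round(residual, 2))
```

With C = σ²I and P = τ²I this reduces to ordinary ridge regression, the special case that the paper's framework generalizes.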
Affiliation(s)
- Robert K Niven
- School of Engineering and Technology, The University of New South Wales, Canberra, ACT 2600, Australia
- Laurent Cordier
- Institut Pprime, CNRS-Université de Poitiers-ISAE-ENSMA, 86360 Chasseneuil-du-Poitou, France
- Ali Mohammad-Djafari
- Laboratoire des Signaux et Systèmes (L2S), CentraleSupélec, 91190 Gif-sur-Yvette, France
2. Shaheen H, Melnik R, Singh S. Data-driven Stochastic Model for Quantifying the Interplay Between Amyloid-beta and Calcium Levels in Alzheimer's Disease. Stat Anal Data Min 2024;17:e11679. PMID: 38646460. PMCID: PMC11031189. DOI: 10.1002/sam.11679.
Abstract
The abnormal aggregation of extracellular amyloid-β (Aβ) in senile plaques, resulting in calcium (Ca²⁺) dyshomeostasis, is one of the primary symptoms of Alzheimer's disease (AD). Significant research efforts have been devoted in the past to better understand the underlying molecular mechanisms driving Aβ deposition and Ca²⁺ dysregulation. Importantly, synaptic impairments, neuronal loss, and cognitive failure in AD patients are all related to the buildup of intraneuronal Aβ accumulation. Moreover, increasing evidence shows a feed-forward loop between Aβ and Ca²⁺ levels, i.e., Aβ disrupts neuronal Ca²⁺ levels, which in turn affects the formation of Aβ. To better understand this interaction, we report a novel stochastic model in which we analyze the positive feedback loop between Aβ and Ca²⁺ using ADNI data. A good therapeutic treatment plan for AD requires precise predictions. Stochastic models offer an appropriate framework for modelling AD, since AD studies are observational in nature and involve regular patient visits. The etiology of AD may be described as a multi-state disease process using the approximate Bayesian computation method. Utilizing ADNI data from 2-year visits for AD patients, we employ this method to investigate the interplay between Aβ and Ca²⁺ levels at various disease development phases. Incorporating the ADNI data in our physics-based Bayesian model, we discovered that a sufficiently large disruption in either Aβ metabolism or intracellular Ca²⁺ homeostasis causes the relative growth rate in both Ca²⁺ and Aβ, which corresponds to the development of AD. The imbalance of Ca²⁺ ions causes Aβ disorders by directly or indirectly affecting a variety of cellular and subcellular processes, and the altered homeostasis may worsen the abnormalities of Ca²⁺ ion transportation and deposition. This suggests that altering the Ca²⁺ balance, or the balance between Aβ and Ca²⁺ by chelating them, may be able to reduce disorders associated with AD and open up new research possibilities for AD therapy.
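The approximate Bayesian computation (ABC) step mentioned in this abstract can be sketched with a toy rejection sampler. The two-variable feedback model, the coupling parameter k, and the tolerance below are our own illustrative stand-ins, not the paper's ADNI-fitted multi-state model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k, steps=100, dt=0.05):
    """Toy stochastic feed-forward loop: a (amyloid) drives c (calcium) and vice versa."""
    a, c = 1.0, 1.0
    for _ in range(steps):
        a += dt * (k * c - 0.5 * a) + 0.02 * rng.normal()
        c += dt * (k * a - 0.5 * c) + 0.02 * rng.normal()
    return a

# "Observed" summary statistic generated with a known coupling strength.
k_true = 0.4
obs = simulate(k_true)

# Rejection ABC: keep prior draws whose simulated summary statistic
# lands within a tolerance of the observed one.
prior_draws = rng.uniform(0.0, 1.0, size=2000)
accepted = [k for k in prior_draws if abs(simulate(k) - obs) < 0.5]

print(len(accepted), round(float(np.mean(accepted)), 2))
```

The accepted draws approximate the posterior over the coupling strength without ever evaluating a likelihood, which is what makes ABC attractive for observational, visit-based study designs.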
Affiliation(s)
- Hina Shaheen
- Faculty of Science, University of Manitoba, Winnipeg, MB R3T 2N2, Canada
- Roderick Melnik
- MS2Discovery Interdisciplinary Research Institute, Wilfrid Laurier University, Waterloo, ON N2L 3C5, Canada
- Sundeep Singh
- Faculty of Sustainable Design Engineering, University of Prince Edward Island, Charlottetown, PE C1A 4P3, Canada
- The Alzheimer’s Disease Neuroimaging Initiative
- Data used in preparation of this article were generated by the Alzheimer’s Disease Metabolomics Consortium (ADMC). As such, the investigators within the ADMC provided data but did not participate in the analysis or writing of this report. A complete listing of ADMC investigators can be found at: https://sites.duke.edu/adnimetab/team/
3. Parr T, Holmes E, Friston KJ, Pezzulo G. Cognitive effort and active inference. Neuropsychologia 2023;184:108562. PMID: 37080424. PMCID: PMC10636588. DOI: 10.1016/j.neuropsychologia.2023.108562.
Abstract
This paper aims to integrate some key constructs in the cognitive neuroscience of cognitive control and executive function by formalising the notion of cognitive (or mental) effort in terms of active inference. To do so, we call upon a task used in neuropsychology to assess impulse inhibition: the Stroop task. In this task, participants must suppress the impulse to read a colour word and instead report the colour of the text of the word. The Stroop task is characteristically effortful, and we unpack a theory of mental effort in which, to perform this task accurately, participants must overcome prior beliefs about how they would normally act. However, our interest here is not in overt action, but in covert (mental) action. Mental actions change our beliefs but have no (direct) effect on the outside world, much like deploying covert attention. This account of effort as mental action lets us generate multimodal (choice, reaction time, and electrophysiological) data of the sort we might expect from a human participant engaging in this task. We analyse how parameters determining cognitive effort influence simulated responses and demonstrate that, when provided only with performance data, these parameters can be recovered, provided they are within a certain range.
Affiliation(s)
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, UK
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, UK
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, UK
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
4. Duan X, Rubin JE, Swigon D. Rigorous Mapping of Data to Qualitative Properties of Parameter Values and Dynamics: A Case Study on a Two-Variable Lotka-Volterra System. Bull Math Biol 2023;85:64. PMID: 37270711. DOI: 10.1007/s11538-023-01165-0.
Abstract
In this work, we describe mostly analytical work related to a novel approach to parameter identification for a two-variable Lotka-Volterra (LV) system. Specifically, this approach is qualitative, in that we aim not to determine precise values of model parameters but rather to establish relationships among these parameter values and properties of the trajectories that they generate, based on a small number of available data points. In this vein, we prove a variety of results about the existence, uniqueness, and signs of model parameters for which the trajectory of the system passes exactly through a set of three given data points, representing the smallest possible data set needed for identification of model parameter values. We find that in most situations such a data set determines these values uniquely; we also thoroughly investigate the alternative cases, which result in nonuniqueness or even nonexistence of model parameter values that fit the data. In addition to results about identifiability, our analysis provides information about the long-term dynamics of solutions of the LV system directly from the data without the necessity of estimating specific parameter values.
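The structure that makes this identification problem tractable is that the Lotka-Volterra equations are linear in their parameters once written in terms of logarithmic derivatives, e.g. d(ln x)/dt = α − βy. The sketch below exploits this with a densely sampled trajectory and least squares; unlike the paper, which works from exactly three data points, this is only a toy illustration with invented parameter values:

```python
import numpy as np

# Classic two-variable Lotka-Volterra system:
#   dx/dt = x (alpha - beta * y),   dy/dt = y (delta * x - gamma)
alpha, beta, delta, gamma = 1.0, 0.5, 0.3, 0.8

def rhs(s):
    x, y = s
    return np.array([x * (alpha - beta * y), y * (delta * x - gamma)])

# Integrate with RK4 on a fine grid to generate "data".
dt, n = 0.01, 1000
traj = np.empty((n + 1, 2))
traj[0] = (2.0, 1.0)
for i in range(n):
    s = traj[i]
    k1 = rhs(s); k2 = rhs(s + dt/2*k1); k3 = rhs(s + dt/2*k2); k4 = rhs(s + dt*k3)
    traj[i + 1] = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Identification: d(ln x)/dt = alpha - beta*y is linear in (alpha, beta),
# so centred finite differences of ln x give a least-squares problem.
x, y = traj[:, 0], traj[:, 1]
dlnx = (np.log(x[2:]) - np.log(x[:-2])) / (2 * dt)
A = np.column_stack([np.ones_like(dlnx), -y[1:-1]])
alpha_hat, beta_hat = np.linalg.lstsq(A, dlnx, rcond=None)[0]
print(round(alpha_hat, 3), round(beta_hat, 3))
```

The same construction applied to d(ln y)/dt recovers (δ, γ); the paper's contribution is to push this linearity to its limit, characterizing existence, uniqueness, and signs of parameters from the minimal three-point data set.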
Affiliation(s)
- Xiaoyu Duan
- Lab of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, 12 South Dr., Bethesda, MD, 20892, USA
- Jonathan E Rubin
- Department of Mathematics, University of Pittsburgh, 301 Thackeray Avenue, Pittsburgh, PA, 15260, USA
- David Swigon
- Department of Mathematics, University of Pittsburgh, 301 Thackeray Avenue, Pittsburgh, PA, 15260, USA
5. Qu M, Chang C, Wang J, Hu J, Hu N. Nonnegative block-sparse Bayesian learning algorithm for EEG brain source localization. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103838.
6. Zhang S, Zhu Z, Zhang B, Feng B, Yu T, Li Z, Zhang Z, Huang G, Liang Z. Overall optimization of CSP based on ensemble learning for motor imagery EEG decoding. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103825.
7. Maradesa A, Py B, Quattrocchi E, Ciucci F. The probabilistic deconvolution of the distribution of relaxation times with finite Gaussian processes. Electrochim Acta 2022. DOI: 10.1016/j.electacta.2022.140119.
8.
Abstract
A hallmark of adaptation in humans and other animals is our ability to control how we think and behave across different settings. Research has characterized the various forms cognitive control can take, including enhancement of goal-relevant information, suppression of goal-irrelevant information, and overall inhibition of potential responses, and has identified computations and neural circuits that underpin this multitude of control types. Studies have also identified a wide range of situations that elicit adjustments in control allocation (e.g., those eliciting signals indicating an error or increased processing conflict), but the rules governing when a given situation will give rise to a given control adjustment remain poorly understood. Significant progress has recently been made on this front by casting the allocation of control as a decision-making problem. This approach has developed unifying and normative models that prescribe when and how a change in incentives and task demands will result in changes in a given form of control. Despite their successes, these models, and the experiments that have been developed to test them, have yet to face their greatest challenge: deciding how to select among the multiplicity of configurations that control can take at any given time. Here, we will lay out the complexities of the inverse problem inherent to cognitive control allocation, and their close parallels to inverse problems within motor control (e.g., choosing between redundant limb movements). We discuss existing solutions to motor control's inverse problems drawn from optimal control theory, which have proposed that effort costs act to regularize actions and transform motor planning into a well-posed problem. These same principles may help shed light on how our brains optimize over complex control configurations, while providing a new normative perspective on the origins of mental effort.
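The motor-control analogy at the heart of this argument can be made concrete: a redundant linear system A u = b has infinitely many solutions, and an effort cost ||u||² regularizes the choice, making the inverse problem well-posed. The matrix, target, and regularization weight below are illustrative inventions, not from any cited model:

```python
import numpy as np

# Redundant "motor" system: a 2-D task goal b, 4 actuators. Infinitely
# many commands u satisfy A u = b, so the inverse problem is ill-posed.
A = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 0.3, 1.0, 0.4]])
b = np.array([1.0, 0.5])

# An effort cost regularizes the choice: minimize
#   ||A u - b||^2 + lam * ||u||^2,
# which always has a unique solution.
lam = 1e-3
u = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ b)

# As lam -> 0 this approaches the minimum-norm (pseudoinverse) command.
u_min_norm = np.linalg.pinv(A) @ b
print(np.round(u, 3), np.round(u_min_norm, 3))
```

The proposal sketched in the abstract is that an analogous cost over control configurations could select among the many cognitive-control settings compatible with a given goal.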
9. Javidan M, Esfandi H, Pashaie R. Optimization of data acquisition operation in optical tomography based on estimation theory. Biomed Opt Express 2021;12:5670-5690. PMID: 34692208. PMCID: PMC8515978. DOI: 10.1364/boe.432687.
Abstract
The data acquisition process is often the most time-consuming and costly operation in tomography. Currently, raster scanning is still the common practice for making sequential measurements in most tomography scanners. Raster scanning is known to be slow, and such scanners usually cannot keep up with the speed of change when imaging dynamically evolving objects. In this research, we studied the possibility of using estimation theory and our prior knowledge about the sample under test to reduce the number of measurements required to achieve a given image quality. This systematic approach to optimization of the data acquisition process also provides a vision toward improving the geometry of the scanner and reducing the effect of noise, including the common state-dependent noise of detectors. The theory is developed in the article, and simulations are provided to illustrate the concepts discussed.
Affiliation(s)
- Mahshad Javidan
- Electrical Engineering and Computer Science Department, Florida Atlantic University, Boca Raton, FL 33432, USA
- Authors contributed equally
- Hadi Esfandi
- Electrical Engineering and Computer Science Department, Florida Atlantic University, Boca Raton, FL 33432, USA
- Authors contributed equally
- Ramin Pashaie
- Electrical Engineering and Computer Science Department, Florida Atlantic University, Boca Raton, FL 33432, USA
10. Blatter D, Ray A, Key K. Two-dimensional Bayesian inversion of magnetotelluric data using trans-dimensional Gaussian processes. Geophys J Int 2021;226:548-563. PMID: 33994835. PMCID: PMC8102138. DOI: 10.1093/gji/ggab110.
Abstract
Bayesian inversion of electromagnetic data produces crucial uncertainty information on inferred subsurface resistivity. Due to their high computational cost, however, Bayesian inverse methods have largely been restricted to computationally expedient 1-D resistivity models. In this study, we successfully demonstrate, for the first time, a fully 2-D, trans-dimensional Bayesian inversion of magnetotelluric (MT) data. We render this problem tractable from a computational standpoint by using a stochastic interpolation algorithm known as a Gaussian process (GP) to achieve a parsimonious parametrization of the model vis-à-vis the dense parameter grids used in numerical forward modelling codes. The GP links a trans-dimensional, parallel tempered Markov chain Monte Carlo sampler, which explores the parsimonious model space, to MARE2DEM, an adaptive finite element forward solver. MARE2DEM computes the model response using a dense parameter mesh with resistivity assigned via the GP model. We demonstrate the new trans-dimensional GP sampler by inverting both synthetic and field MT data for 2-D models of electrical resistivity, with the field data example converging within 10 days on 148 cores, a non-negligible but tractable computational cost. For a field data inversion, our algorithm achieves a parameter reduction of over 32× compared to the fixed parameter grid used for the MARE2DEM regularized inversion. Resistivity probability distributions computed from the ensemble of models produced by the inversion yield credible intervals and interquartile plots that quantitatively show the non-linear 2-D uncertainty in model structure. This uncertainty could then be propagated to other physical properties that impact resistivity including bulk composition, porosity and pore-fluid content.
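The parsimonious GP parametrization described here can be sketched in miniature: a handful of nodes carry the model parameters, and the GP mean interpolates them onto the dense grid a forward solver would use. This is a 1-D toy with an invented kernel, node positions, and values, not the authors' MARE2DEM pipeline:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Parsimonious parametrization: a few GP nodes carry the model...
nodes = np.array([0.0, 0.3, 0.5, 0.8, 1.0])     # node positions (depth, say)
log_rho = np.array([1.0, 2.0, 1.5, 0.5, 1.0])   # log-resistivity at nodes

# ...and the GP mean interpolates them onto the dense grid that a
# forward solver would actually evaluate.
grid = np.linspace(0.0, 1.0, 200)
K = rbf(nodes, nodes) + 1e-8 * np.eye(len(nodes))
weights = np.linalg.solve(K, log_rho)
log_rho_grid = rbf(grid, nodes) @ weights

print(len(nodes), "->", len(log_rho_grid), "parameters on the forward grid")
```

A trans-dimensional sampler would additionally propose adding, deleting, and moving nodes, with the GP always supplying a smooth field on the forward-modelling mesh.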
Affiliation(s)
- Anandaroop Ray
- Geoscience Australia, Symonston, Australian Capital Territory, 2609, Australia
- Kerry Key
- Lamont-Doherty Earth Observatory, Columbia University, Palisades, NY, 10964, USA
11. Hashemi A, Cai C, Kutyniok G, Müller KR, Nagarajan SS, Haufe S. Unification of sparse Bayesian learning algorithms for electromagnetic brain imaging with the majorization minimization framework. Neuroimage 2021;239:118309. PMID: 34182100. PMCID: PMC8433122. DOI: 10.1016/j.neuroimage.2021.118309.
Abstract
Methods for electro- or magnetoencephalography (EEG/MEG) based brain source imaging (BSI) using sparse Bayesian learning (SBL) have been demonstrated to achieve excellent performance in situations with low numbers of distinct active sources, such as event-related designs. This paper extends the theory and practice of SBL in three important ways. First, we reformulate three existing SBL algorithms under the majorization-minimization (MM) framework. This unification perspective not only provides a useful theoretical framework for comparing different algorithms in terms of their convergence behavior, but also provides a principled recipe for constructing novel algorithms with specific properties by designing appropriate bounds of the Bayesian marginal likelihood function. Second, building on the MM principle, we propose a novel method called LowSNR-BSI that achieves favorable source reconstruction performance in low signal-to-noise-ratio (SNR) settings. Third, precise knowledge of the noise level is a crucial requirement for accurate source reconstruction. Here we present a novel principled technique to accurately learn the noise variance from the data either jointly within the source reconstruction procedure or using one of two proposed cross-validation strategies. Empirically, we show that the monotone convergence behavior predicted by MM theory is confirmed in numerical experiments. Using simulations, we further demonstrate the advantage of LowSNR-BSI over conventional SBL in low-SNR regimes, and the advantage of learned noise levels over estimates derived from baseline data. To demonstrate the usefulness of our novel approach, we show neurophysiologically plausible source reconstructions on averaged auditory evoked potential data.
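A minimal EM-style sparse Bayesian learning loop, with the noise variance learned jointly, illustrates the kind of iteration the MM framework unifies. This is a generic SBL/EM sketch on an invented problem, not any of the paper's specific algorithms, and the dimensions are illustrative stand-ins for an EEG/MEG leadfield:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse linear inverse problem y = Phi w + noise.
n, p = 40, 80
Phi = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[[5, 30, 61]] = [2.0, -1.5, 1.0]
y = Phi @ w_true + 0.1 * rng.normal(size=n)

# SBL via EM updates (one member of the MM family): per-coefficient
# prior variances gamma and the noise variance sigma2 are learned jointly.
gamma, sigma2 = np.ones(p), 1.0
for _ in range(200):
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ Phi.T @ y / sigma2                       # posterior mean
    resid = y - Phi @ mu
    # EM noise-variance update (uses the gammas that produced Sigma).
    sigma2 = (resid @ resid + sigma2 * np.sum(1.0 - np.diag(Sigma) / gamma)) / n
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)     # gamma_i -> 0 prunes i

support = np.argsort(np.abs(mu))[-3:]
print(sorted(int(i) for i in support), round(float(sigma2), 4))
```

Each EM step majorizes the negative marginal likelihood, which is why MM theory guarantees the monotone convergence the paper verifies empirically.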
Affiliation(s)
- Ali Hashemi
- Uncertainty, Inverse Modeling and Machine Learning Group, Technische Universität Berlin, Germany; Machine Learning Group, Technische Universität Berlin, Germany; Berlin Center for Advanced Neuroimaging (BCAN), Charité - Universitätsmedizin Berlin, Germany; Institut für Mathematik, Technische Universität Berlin, Germany
- Chang Cai
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA; National Engineering Research Center for E-Learning, Central China Normal University, China
- Gitta Kutyniok
- Mathematisches Institut, Ludwig-Maximilians-Universität München, Germany; Department of Physics and Technology, University of Tromsø, Norway
- Klaus-Robert Müller
- Machine Learning Group, Technische Universität Berlin, Germany; BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea; Max Planck Institute for Informatics, Saarbrücken, Germany
- Srikantan S Nagarajan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Stefan Haufe
- Uncertainty, Inverse Modeling and Machine Learning Group, Technische Universität Berlin, Germany; Berlin Center for Advanced Neuroimaging (BCAN), Charité - Universitätsmedizin Berlin, Germany; Mathematical Modelling and Data Analysis Department, Physikalisch-Technische Bundesanstalt Braunschweig und Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
12. Salinet J, Molero R, Schlindwein FS, Karel J, Rodrigo M, Rojo-Álvarez JL, Berenfeld O, Climent AM, Zenger B, Vanheusden F, Paredes JGS, MacLeod R, Atienza F, Guillem MS, Cluitmans M, Bonizzi P. Electrocardiographic Imaging for Atrial Fibrillation: A Perspective From Computer Models and Animal Experiments to Clinical Value. Front Physiol 2021;12:653013. PMID: 33995122. PMCID: PMC8120164. DOI: 10.3389/fphys.2021.653013.
Abstract
Electrocardiographic imaging (ECGI) is a technique to reconstruct non-invasively the electrical activity on the heart surface from body-surface potential recordings and geometric information of the torso and the heart. ECGI has shown scientific and clinical value when used to characterize and treat both atrial and ventricular arrhythmias. Regarding atrial fibrillation (AF), the characterization of the electrical propagation and the underlying substrate favoring AF is inherently more challenging than for ventricular arrhythmias, due to the progressive and heterogeneous nature of the disease and its manifestation, the small volume and wall thickness of the atria, and the relatively large role of microstructural abnormalities in AF. At the same time, ECGI has the advantage over other mapping technologies of allowing a global characterization of atrial electrical activity at every atrial beat and non-invasively. However, since ECGI is time-consuming and costly and the use of electrical mapping to guide AF ablation is still not fully established, the clinical value of ECGI for AF is still under assessment. Nonetheless, AF is known to be the manifestation of a complex interaction between electrical and structural abnormalities and therefore, true electro-anatomical-structural imaging may elucidate important key factors of AF development, progression, and treatment. Therefore, it is paramount to identify which clinical questions could be successfully addressed by ECGI when it comes to AF characterization and treatment, and which questions may be beyond its technical limitations. In this manuscript we review the questions that researchers have tried to address on the use of ECGI for AF characterization and treatment guidance (for example, localization of AF triggers and sustaining mechanisms), and we discuss the technological requirements and validation. We address experimental and clinical results, limitations, and future challenges for fruitful application of ECGI for AF understanding and management. We pay attention to existing techniques and clinical applications, to computer models and animal and human experiments, and to the challenges of methodological and clinical validation. The overall objective of the study is to provide a consensus on valuable directions that ECGI research may take to provide future improvements in AF characterization and treatment guidance.
Affiliation(s)
- João Salinet
- Biomedical Engineering, Centre for Engineering, Modelling and Applied Social Sciences (CECS), Federal University of ABC, São Bernardo do Campo, Brazil
- Rubén Molero
- ITACA Institute, Universitat Politècnica de València, València, Spain
- Fernando S. Schlindwein
- School of Engineering, University of Leicester, United Kingdom and National Institute for Health Research, Leicester Cardiovascular Biomedical Research Centre, Glenfield Hospital, Leicester, United Kingdom
- Joël Karel
- Department of Data Science and Knowledge Engineering, Maastricht University, Maastricht, Netherlands
- Miguel Rodrigo
- Electronic Engineering Department, Universitat de València, València, Spain
- José Luis Rojo-Álvarez
- Department of Signal Theory and Communications and Telematic Systems and Computation, University Rey Juan Carlos, Madrid, Spain
- Omer Berenfeld
- Center for Arrhythmia Research, University of Michigan, Ann Arbor, MI, United States
- Andreu M. Climent
- ITACA Institute, Universitat Politècnica de València, València, Spain
- Brian Zenger
- Biomedical Engineering Department, Scientific Computing and Imaging Institute (SCI), and Cardiovascular Research and Training Institute (CVRTI), The University of Utah, Salt Lake City, UT, United States
- Frederique Vanheusden
- Department of Engineering, School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom
- Jimena Gabriela Siles Paredes
- Biomedical Engineering, Centre for Engineering, Modelling and Applied Social Sciences (CECS), Federal University of ABC, São Bernardo do Campo, Brazil
- Rob MacLeod
- Biomedical Engineering Department, Scientific Computing and Imaging Institute (SCI), and Cardiovascular Research and Training Institute (CVRTI), The University of Utah, Salt Lake City, UT, United States
- Felipe Atienza
- Cardiology Department, Hospital General Universitario Gregorio Marañón, Instituto de Investigación Sanitaria Gregorio Marañón, and Facultad de Medicina, Universidad Complutense de Madrid, Madrid, Spain
- María S. Guillem
- ITACA Institute, Universitat Politècnica de València, València, Spain
- Matthijs Cluitmans
- Department of Cardiology, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, Netherlands
- Pietro Bonizzi
- Department of Data Science and Knowledge Engineering, Maastricht University, Maastricht, Netherlands
13. Zhang T, Pled F, Desceliers C. Robust Multiscale Identification of Apparent Elastic Properties at Mesoscale for Random Heterogeneous Materials with Multiscale Field Measurements. Materials (Basel) 2020;13:E2826. PMID: 32586015. PMCID: PMC7345255. DOI: 10.3390/ma13122826.
Abstract
The aim of this work is to efficiently and robustly solve the statistical inverse problem related to the identification of the elastic properties at both macroscopic and mesoscopic scales of heterogeneous anisotropic materials with a complex microstructure that usually cannot be properly described in terms of their mechanical constituents at microscale. Within the context of linear elasticity theory, the apparent elasticity tensor field at a given mesoscale is modeled by a prior non-Gaussian tensor-valued random field. A general methodology using multiscale displacement field measurements simultaneously made at both macroscale and mesoscale has been recently proposed for the identification of the hyperparameters of such a prior stochastic model by solving a multiscale statistical inverse problem using a stochastic computational model and some information from displacement fields at both macroscale and mesoscale. This paper contributes to the improvement of the computational efficiency, accuracy and robustness of such a method by introducing (i) a mesoscopic numerical indicator related to the spatial correlation length(s) of kinematic fields, allowing the time-consuming global optimization algorithm (genetic algorithm) used in a previous work to be replaced with a more efficient algorithm and (ii) an ad hoc stochastic representation of the hyperparameters involved in the prior stochastic model in order to enhance both the robustness and the precision of the statistical inverse identification method. Finally, the proposed improved method is first validated on in silico materials within the framework of 2D plane stress and 3D linear elasticity (using multiscale simulated data obtained through numerical computations) and then exemplified on a real heterogeneous biological material (beef cortical bone) within the framework of 2D plane stress linear elasticity (using multiscale experimental data obtained through mechanical testing monitored by digital image correlation).