1. Low frequency signal detection via correlated Ramsey measurements. J Magn Reson 2024; 363:107691. doi: 10.1016/j.jmr.2024.107691. PMID: 38776598.
Abstract
The low frequency region of the spectrum is a challenging regime for quantum probes. We argue that, in this regime, performing Ramsey measurements while carefully controlling the time at which each measurement is initiated is an excellent signal detection strategy. Using the Fisher information, we demonstrate high-quality performance in the low frequency regime compared to more elaborate measurement sequences, and we optimize the correlated Ramsey sequence for any given experimental parameters. Correlated Ramsey rivals state-of-the-art protocols and can even outperform commonly employed sequences such as dynamical decoupling in the detection of low frequency signals. Typical quantum detection protocols for oscillating signals require adjusting the time separation between pulses to match the half period of the target signal, which limits their scope to signals whose period is shorter than the characteristic decoherence time of the probe; other protocols target primarily static signals. In contrast, the time-tagged correlated Ramsey sequence simultaneously tracks the amplitude and the phase of the target signal regardless of its frequency, which crucially permits correlating measurements in post-processing and leads to efficient spectral reconstruction.
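As a generic back-of-the-envelope illustration of the per-shot Fisher information invoked above (not the paper's actual protocol; the phase model, readout, and parameter values below are hypothetical), a projective Ramsey readout with success probability p = (1 + cos φ)/2 yields, for amplitude estimation, the binary-outcome Fisher information I(b) = (dp/db)²/(p(1−p)):

```python
import numpy as np

def ramsey_fisher_info(b, tau, gamma=1.0):
    """Single-shot Fisher information about a signal amplitude b.

    Toy model: the probe accumulates a Ramsey phase phi = gamma * b * tau
    over the free-evolution time tau and is read out projectively with
    success probability p = (1 + cos(phi)) / 2. For a binary outcome,
    I(b) = (dp/db)^2 / (p * (1 - p)).
    """
    phi = gamma * b * tau
    p = (1.0 + np.cos(phi)) / 2.0
    dp_db = -gamma * tau * np.sin(phi) / 2.0
    return dp_db**2 / (p * (1.0 - p))

# On this model, p(1-p) = sin(phi)^2 / 4, so I(b) = (gamma * tau)^2:
# the per-shot information is set by the interrogation time, not by b.
info = ramsey_fisher_info(b=0.3, tau=1.0)
```

On this toy model the information grows quadratically with the free-evolution time, which is why decoherence (capping the usable tau) is the binding constraint the abstract refers to.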
2. Cramér-Rao Lower Bound for Magnetic Field Localization around Elementary Structures. Sensors (Basel) 2024; 24:2402. doi: 10.3390/s24082402. PMID: 38676018; PMCID: PMC11054935.
Abstract
The determination of a mobile terminal's position with high accuracy and ubiquitous coverage is still challenging. Global navigation satellite systems (GNSSs) provide sufficient accuracy in areas with a clear view of the sky. For GNSS-denied environments such as indoors, complementary positioning technologies are required. A promising approach is to use the Earth's magnetic field for positioning. In open areas, the Earth's magnetic field is almost homogeneous, which makes it possible to determine the orientation of a mobile device using a compass. In more complex environments such as indoors, ferromagnetic materials distort the Earth's magnetic field, and a compass usually fails. These magnetic distortions, however, are location dependent and can therefore be used for positioning. In this paper, we investigate the influence of elementary structures, in particular a sphere and a cylinder, on the achievable accuracy of magnetic positioning methods. First, we analytically calculate the magnetic field around a sphere and a cylinder placed in an outer homogeneous magnetic field. Assuming a noisy magnetic field sensor, we then investigate the achievable positioning accuracy when observing these resulting fields. For our analysis, we calculate the Cramér-Rao lower bound, a fundamental lower bound on the variance of any unbiased estimator. The results show how the positioning error variance depends on the sensor properties (in particular the sensor noise variance), the material properties (the relative permeability of the sphere or cylinder), and the location of the sensor relative to the sphere or cylinder. The insights provided in this work make it possible to evaluate experimental results from a theoretical perspective.
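A scalar toy version of this analysis (not the paper's sphere/cylinder field model; the dipole-like decay and all numbers below are hypothetical): for one noisy field sample m = B(x) + N(0, σ²), the Fisher information about position is I(x) = B′(x)²/σ² and the CRLB is its inverse:

```python
import numpy as np

def crlb_position(B, x, sigma, dx=1e-6):
    """Cramer-Rao lower bound on the variance of an unbiased position
    estimate from a single noisy scalar field sample m = B(x) + N(0, sigma^2).

    Fisher information: I(x) = B'(x)^2 / sigma^2; CRLB = 1 / I(x).
    The derivative is taken by central finite difference.
    """
    dBdx = (B(x + dx) - B(x - dx)) / (2 * dx)
    fisher = dBdx**2 / sigma**2
    return 1.0 / fisher

# Hypothetical dipole-like field perturbation decaying as 1/x^3
# (loosely mimicking the far field of a magnetized sphere):
k = 2.0e-6
B = lambda x: k / x**3
bound = crlb_position(B, x=0.5, sigma=1.0e-7)
```

The bound scales with σ² and with the inverse squared field gradient, which is the qualitative dependency the abstract describes: localization is hardest where the distortion is flat.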
3. Navigation and the efficiency of spatial coding: insights from closed-loop simulations. Brain Struct Funct 2024; 229:577-592. doi: 10.1007/s00429-023-02637-8. PMID: 37029811; PMCID: PMC10978723.
Abstract
Spatial learning is critical for survival and its underlying neuronal mechanisms have been studied extensively. These studies have revealed a wealth of information about the neural representations of space, such as place cells and boundary cells. While many studies have focused on how these representations emerge in the brain, their functional role in driving spatial learning and navigation has received much less attention. We extended an existing computational modeling tool-chain to study the functional role of spatial representations using closed-loop simulations of spatial learning. At the heart of the model agent was a spiking neural network that formed a ring attractor. This network received inputs from place and boundary cells and the location of the activity bump in this network was the output. This output determined the movement directions of the agent. We found that the navigation performance depended on the parameters of the place cell input, such as their number, the place field sizes, and peak firing rate, as well as, unsurprisingly, the size of the goal zone. The dependence on the place cell parameters could be accounted for by just a single variable, the overlap index, but this dependence was nonmonotonic. By contrast, performance scaled monotonically with the Fisher information of the place cell population. Our results therefore demonstrate that efficiently encoding spatial information is critical for navigation performance.
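The Fisher information of a place-cell population has a standard closed form for independent Poisson neurons, I(x) = T Σᵢ fᵢ′(x)²/fᵢ(x); a sketch with Gaussian tuning curves (the tuning parameters here are illustrative, not those of the simulated agent):

```python
import numpy as np

def place_cell_fisher(x, centers, width, peak_rate, T=1.0):
    """Fisher information about position x carried by a population of
    independent Poisson place cells with Gaussian tuning curves,
    observed over a time window T: I(x) = T * sum_i f_i'(x)^2 / f_i(x).
    """
    f = peak_rate * np.exp(-((x - centers) ** 2) / (2 * width**2))
    df = f * (centers - x) / width**2          # analytic tuning-curve slope
    return T * np.sum(df**2 / f)

centers = np.linspace(0.0, 1.0, 50)            # 50 cells tiling a 1 m track
info = place_cell_fisher(0.5, centers, width=0.08, peak_rate=20.0)
```

In this form the dependence on cell number, field size, and peak rate (the same parameters varied in the study) is explicit; for instance, information is exactly linear in the peak firing rate.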
4. Item selection methods in multidimensional computerized adaptive testing for forced-choice items using Thurstonian IRT model. Behav Res Methods 2024; 56:600-614. doi: 10.3758/s13428-022-02037-6. PMID: 36750522.
Abstract
Multidimensional computerized adaptive testing for forced-choice items (MFC-CAT) combines the benefits of multidimensional forced-choice (MFC) items and computerized adaptive testing (CAT) in that it eliminates response biases and reduces administration time. Previous studies that explored designs of MFC-CAT only discussed item selection methods based on the Fisher information (FI), which is known to perform unstably at early stages of CAT. This study proposes a set of new item selection methods for MFC-CAT based on Kullback-Leibler (KL) information (namely MFC-KI, MFC-KB, and MFC-KLP), built on the Thurstonian IRT (TIRT) model. Three simulation studies, including one based on real data, were conducted to compare the performance of the proposed KL-based item selection methods against the existing FI-based methods in three- and five-dimensional MFC-CAT scenarios with various test lengths and inter-trait correlations. Results demonstrate that the proposed KL-based item selection methods are feasible for MFC-CAT and generate acceptable trait estimation accuracy and uniformity of item pool usage. Among the three proposed methods, MFC-KB and MFC-KLP outperformed the existing FI-based item selection methods and resulted in the most accurate trait estimation and relatively even utilization of the item pool.
5. Fisher and Shannon Functionals for Hyperbolic Diffusion. Entropy (Basel) 2023; 25:1627. doi: 10.3390/e25121627. PMID: 38136508; PMCID: PMC10742922.
Abstract
The complexity measure for the distribution in space-time of a finite-velocity diffusion process is calculated. Numerical results are presented for the calculation of Fisher's information, Shannon's entropy, and the Cramér-Rao inequality, all of which are associated with a positively normalized solution to the telegrapher's equation. In the framework of hyperbolic diffusion, the non-local Fisher's information with the x-parameter is related to the local Fisher's information with the t-parameter. A perturbation theory is presented to calculate Shannon's entropy of the telegrapher's equation at long times, as well as a toy model to describe the system as an attenuated wave in the ballistic regime (short times).
6. Model uncertainty quantification in Cox regression. Biometrics 2023; 79:1726-1736. doi: 10.1111/biom.13823. PMID: 36607238.
Abstract
We consider covariate selection and the ensuing model uncertainty aspects in the context of Cox regression. The perspective we take is probabilistic, and we handle it within a Bayesian framework. One of the critical elements in variable/model selection is choosing a suitable prior for model parameters. Here, we derive the so-called conventional prior approach and propose a comprehensive implementation that results in an automatic procedure. Our simulation studies and real applications show improvements over existing literature. For the sake of reproducibility but also for its intrinsic interest for practitioners, a web application requiring minimum statistical knowledge implements the proposed approach.
7. Fisher Information as General Metrics of Quantum Synchronization. Entropy (Basel) 2023; 25:1116. doi: 10.3390/e25081116. PMID: 37628145; PMCID: PMC10453851.
Abstract
Quantum synchronization has emerged as a crucial phenomenon in quantum nonlinear dynamics with potential applications in quantum information processing. Multiple measures for quantifying quantum synchronization exist, but no single metric is universally adopted. In this paper, we propose using classical and quantum Fisher information (FI) as alternative metrics to detect and measure quantum synchronization. We establish the connection between FI and quantum synchronization, demonstrating that both classical and quantum FI can be deployed as more general indicators of quantum phase synchronization in some regimes where all other existing measures fail to provide reliable results. We show the advantages of FI-based measures, especially in 2-to-1 synchronization. Furthermore, we analyze the impact of noise on the synchronization measures, revealing the robustness and susceptibility of each method in the presence of dissipation and decoherence. Our results open up new avenues for understanding and exploiting quantum synchronization.
8. Genome entropy and network centrality contrast exploration and exploitation in evolution of foodborne pathogens. Phys Biol 2023. doi: 10.1088/1478-3975/acd899. PMID: 37224820.
Abstract
Modelling evolution of foodborne pathogens is crucial for mitigation and prevention of outbreaks. We apply network-theoretic and information-theoretic methods to trace evolutionary pathways of Salmonella Typhimurium in New South Wales, Australia, by studying whole genome sequencing surveillance data over a five-year period which included several outbreaks. The study derives both undirected and directed genotype networks based on genetic proximity, and relates the network's structural property (centrality) to its functional property (prevalence). The centrality-prevalence space derived for the undirected network reveals a salient exploration-exploitation distinction across the pathogens, further quantified by the normalised Shannon entropy and the Fisher information of the corresponding shell genome. This distinction is also analysed by tracing the probability density along evolutionary paths in the centrality-prevalence space. We quantify the evolutionary pathways, and show that pathogens exploring the evolutionary search-space during the considered period begin to exploit their environment (their prevalence increases resulting in outbreaks), but eventually encounter a bottleneck formed by epidemic containment measures.
9. Approximate spectral decomposition of Fisher information matrix for simple ReLU networks. Neural Netw 2023; 164:691-706. doi: 10.1016/j.neunet.2023.05.017. PMID: 37262931.
Abstract
We analyze the Fisher information matrix (FIM) of one-hidden-layer networks with the ReLU activation function. For such a network, let W denote the d×p weight matrix from the d-dimensional input to the hidden layer consisting of p neurons, and v the p-dimensional weight vector from the hidden layer to the scalar output. We focus on the FIM of v, which we denote by I. Under certain conditions, we characterize the first three clusters of eigenvalues and eigenvectors of the FIM. Specifically, we show that the following approximately holds. (1) Since I is entrywise non-negative owing to the ReLU, the first eigenvalue is the Perron-Frobenius eigenvalue. (2) For the cluster of the next-largest eigenvalues, the eigenspace is spanned by the row vectors of W. (3) The direct sum of the eigenspace of the first eigenvalue and that of the third cluster is spanned by the set of all vectors obtained as the Hadamard product of any pair of row vectors of W. We confirmed by numerical calculation that the above is approximately correct when the number of hidden nodes is about 10,000.
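Under a Gaussian-noise regression model, the FIM of the output weights v reduces to I = E_x[φ(Wx)φ(Wx)ᵀ] with φ = ReLU, which can be estimated by Monte Carlo and eigendecomposed. A minimal numerical sketch of property (1); the dimensions, the Gaussian input assumption, and the p×d storage of W are our choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 5, 8, 20000            # input dim, hidden width, Monte Carlo samples
W = rng.standard_normal((p, d))  # one weight row per hidden neuron

# Monte Carlo estimate of I = E_x[ relu(Wx) relu(Wx)^T ] for x ~ N(0, I_d).
X = rng.standard_normal((n, d))
H = np.maximum(W @ X.T, 0.0)     # p x n matrix of hidden-layer activations
I_hat = (H @ H.T) / n            # empirical FIM of the output weights

eigvals, eigvecs = np.linalg.eigh(I_hat)
# I_hat is entrywise non-negative, so by Perron-Frobenius its leading
# eigenvector can be chosen entrywise non-negative; fix the sign and check.
top = eigvecs[:, -1]
top = top if top.sum() >= 0 else -top
```

The sign-fixed leading eigenvector coming out entrywise non-negative is exactly the Perron-Frobenius behaviour claimed in point (1).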
10. Square Root Convexity of Fisher Information along Heat Flow in Dimension Two. Entropy (Basel) 2023; 25:e25040558. doi: 10.3390/e25040558. PMID: 37190344; PMCID: PMC10137932.
Abstract
Recently, Ledoux, Nair, and Wang proved that the Fisher information along the heat flow is log-convex in dimension one, that is, (d²/dt²) log I(X_t) ≥ 0 for n = 1, where X_t is a random variable whose density satisfies the heat equation. In this paper, we consider the higher-dimensional case and prove that the Fisher information is square root convex in dimension two, that is, (d²/dt²) √I(X_t) ≥ 0 for n = 2. The proof is based on the semidefinite programming approach.
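For Gaussian initial data both convexity claims can be checked by hand: with X₀ ~ N(0, s²) and the convention ∂ₜu = ½∂ₓ²u, the density of X_t stays Gaussian with variance s² + t, so I(X_t) = 1/(s² + t). A one-dimensional numerical check of this special case (illustrative only; the paper's result concerns general densities):

```python
import numpy as np

# Gaussian initial data N(0, s2) under the heat flow (Var(X_t) = s2 + t);
# the Fisher information of a Gaussian is its inverse variance.
s2 = 1.0
I = lambda t: 1.0 / (s2 + t)

def second_diff(f, t, h=1e-3):
    """Central finite-difference estimate of f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

# log I(X_t) = -log(s2 + t) has second derivative 1/(s2 + t)^2 >= 0,
# and sqrt(I(X_t)) = (s2 + t)^(-1/2) has second derivative (3/4)(s2 + t)^(-5/2) >= 0.
ts = np.linspace(0.1, 5.0, 50)
log_convex = all(second_diff(lambda t: np.log(I(t)), t) >= 0 for t in ts)
sqrt_convex = all(second_diff(lambda t: np.sqrt(I(t)), t) >= 0 for t in ts)
```

In the Gaussian case log-convexity implies square root convexity, since x ↦ √x composed with an exponentially convex function stays convex here; the paper's contribution is handling n = 2 without any Gaussianity assumption.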
11. Transversality Conditions for Geodesics on the Statistical Manifold of Multivariate Gaussian Distributions. Entropy (Basel) 2022; 24:e24111698. doi: 10.3390/e24111698. PMID: 36421552; PMCID: PMC9689761.
Abstract
We consider the problem of finding the closest multivariate Gaussian distribution on a constraint surface of all Gaussian distributions to a given distribution. Previous research regarding geodesics on the multivariate Gaussian manifold has focused on finding closed-form, shortest-path distances between two fixed distributions on the manifold, often restricting the parameters to obtain the desired solution. We demonstrate how to employ the techniques of the calculus of variations with a variable endpoint to search for the closest distribution from a family of distributions generated via a constraint set on the parameter manifold. Furthermore, we examine the intermediate distributions along the learned geodesics which provide insight into uncertainty evolution along the paths. Empirical results elucidate our formulations, with visual illustrations concretely exhibiting dynamics of 1D and 2D Gaussian distributions.
12. The information geometry of two-field functional integrals. Information Geometry 2022; 5:427-492. doi: 10.1007/s41884-022-00071-z. PMID: 36447530; PMCID: PMC9700636.
Abstract
Two-field functional integrals (2FFIs) are an important class of solution methods for generating functions of dissipative processes, including discrete-state stochastic processes, dissipative dynamical systems, and decohering quantum densities. The stationary trajectories of these integrals describe a conserved current by Liouville's theorem, despite the absence of a conserved kinematic phase-space current in the underlying stochastic process. We develop the information geometry of generating functions for discrete-state classical stochastic processes in the Doi-Peliti 2FFI form, and exhibit two quantities conserved along stationary trajectories. One is a Wigner function, familiar as a semiclassical density from quantum-mechanical time-dependent density-matrix methods. The second is an overlap function between directions of variation in an underlying distribution and the directions of relative large-deviation probability that can be used to interrogate the distribution, expressed as an inner product of vector fields in the Fisher information metric. To interpret the time invertibility implied by current conservation, we use generating functions to represent importance sampling protocols, and show that the conserved Fisher information is the differential of a sample volume under deformations of the nominal distribution and the likelihood ratio. We derive a pair of dual affine connections particular to Doi-Peliti theory for the way they separate the roles of the nominal distribution and the likelihood ratio, distinguishing them from the standard dually flat connection of Nagaoka and Amari defined on the importance distribution, and show that dual flatness in the affine coordinates of the coherent-state basis captures the special role played by coherent states in Doi-Peliti theory.
13. Transactional Interpretation and the Generalized Poisson Distribution. Entropy (Basel) 2022; 24:1416. doi: 10.3390/e24101416. PMID: 37420436.
Abstract
The aim of this paper is to study the quantum-like approach to describing the market in the context of the principle of minimum Fisher information. We investigate the validity of using squeezed coherent states as market strategies. For this purpose, we focus on the representation of any squeezed coherent state with respect to the basis of eigenvectors of the market-risk observable. We derive a formula for the probability of finding the squeezed coherent state in one of these eigenstates. The resulting distribution, which we call the generalized Poisson distribution, establishes the relation between squeezed coherent states and their description in the language of risk in quantum terms. We provide a formula specifying the total risk of a squeezed coherent strategy. We then propose a risk-of-risk concept, which is in fact the second central moment of the generalized Poisson distribution, as an important numerical characterization of squeezed coherent strategies. We interpret it on the basis of the time-energy uncertainty relation.
14. A Fisher Information-Based Incompatibility Criterion for Quantum Channels. Entropy (Basel) 2022; 24:e24060805. doi: 10.3390/e24060805. PMID: 35741526; PMCID: PMC9222584.
Abstract
We introduce a new incompatibility criterion for quantum channels based on the notion of (quantum) Fisher information. Our construction is based on a similar criterion for quantum measurements put forward by H. Zhu. We then study the power of the incompatibility criterion in different scenarios. First, we prove the first analytical conditions for the incompatibility of two Schur channels. Then, we study the incompatibility structure of a tuple of depolarizing channels, comparing the newly introduced criterion with the known results from asymmetric quantum cloning.
15. Spectral flow cytometric FRET: Towards a hyper dimensional flow cytometry. Cytometry A 2022; 101:468-473. doi: 10.1002/cyto.a.24561. PMID: 35484961.
16.
Abstract
Obtaining information from the world is important for survival. The brain, therefore, has special mechanisms to extract as much information as possible from sensory stimuli. Given its importance, the amount of available information may underlie aesthetic values. Such information-based aesthetic values would be significant because they would compete with others to drive decision-making. In this article, we ask, "What is the evidence that the amount of information supports aesthetic values?" An important concept in the measurement of informational volume is entropy, and research on aesthetic values has accordingly used Shannon entropy to evaluate the contribution of quantity of information. We review here the concepts of information and aesthetic values, and research on the visual and auditory systems, to probe whether the brain uses entropy or other relevant measures, especially Fisher information, in aesthetic decisions. We conclude that information measures contribute to these decisions in two ways. First, the absolute quantity of information can modulate aesthetic preferences for certain sensory patterns; however, the preference for volume of information is highly individualized, with information measures competing with organizing principles such as rhythm and symmetry. In addition, people tend to be resistant to too much entropy, but not necessarily to high amounts of Fisher information. We show that this resistance may stem in part from the distribution of amount of information in natural sensory stimuli. Second, the measurement of entropy-like quantities over time reveals that they can modulate aesthetic decisions by varying degrees of surprise given temporally integrated expectations. We propose that the amount of information underpins complex aesthetic values, possibly informing the brain about the allocation of resources or the situational appropriateness of certain cognitive models.
17. The Fisher information function and scoring in binary ideal point item response models: a cautionary tale. Br J Math Stat Psychol 2022; 75:182-197. doi: 10.1111/bmsp.12254. PMID: 34687451.
Abstract
This article examines the Fisher information functions, I(θ), and explores implications for scoring of binary ideal point item response models. These models typically appear to have I(θ) that are bimodal and identically equal to 0 at the ideal point. The article shows that this is an inherent property of ideal point IRT models, which either have this property or are indeterminate and thus violate the likelihood regularity conditions. For some models, the indeterminacy can be resolved, generating an effectively unimodal I(θ), albeit with violated regularity conditions. In other cases, I(θ) diverges. All reasonable ideal point IRT models exhibit this behaviour. Users should exercise caution when relying on asymptotics, particularly for shorter assessments. Use of simulated plausible values or prediction from a fully Bayesian estimation is recommended for scoring.
18. Covariate Information Number for Feature Screening in Ultrahigh-Dimensional Supervised Problems. J Am Stat Assoc 2022; 117:1516-1529. doi: 10.1080/01621459.2020.1864380. PMID: 36172297; PMCID: PMC9512254.
Abstract
Contemporary high-throughput experimental and surveying techniques give rise to ultrahigh-dimensional supervised problems with sparse signals; that is, a limited number of observations (n), each with a very large number of covariates (p >> n), only a small share of which is truly associated with the response. In these settings, major concerns about computational burden, algorithmic stability, and statistical accuracy call for substantially reducing the feature space by eliminating redundant covariates before any sophisticated statistical analysis is applied. Along the lines of Sure Independence Screening (Fan and Lv, 2008) and other model- and correlation-based feature screening methods, we propose a model-free procedure called Covariate Information Number - Sure Independence Screening (CIS). CIS uses a marginal utility connected to the notion of traditional Fisher information, possesses the sure screening property, and is applicable to any type of response with continuous features, or any type of features with a continuous response. Simulations and an application to transcriptomic data on rats reveal the comparative strengths of CIS over some popular feature screening methods.
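As a hedged illustration of the marginal screening idea (using plain marginal correlation in the style of Fan and Lv's SIS as a stand-in; this is not the paper's Fisher-information-based Covariate Information Number, and the data below are synthetic):

```python
import numpy as np

def sis_screen(X, y, d):
    """Rank features by absolute marginal correlation with the response
    and keep the indices of the top d (a correlation-based stand-in for
    the CIS marginal utility)."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    score = np.abs(Xc.T @ yc) / len(y)   # |sample correlation| per feature
    return np.argsort(score)[::-1][:d]

rng = np.random.default_rng(1)
n, p = 200, 2000                          # p >> n with a sparse signal
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 5] + rng.standard_normal(n)
kept = sis_screen(X, y, d=50)             # should retain features 0 and 5
```

The point of any such procedure, CIS included, is the sure screening property: with high probability, the small kept set contains all truly active covariates, so downstream modeling can ignore the other thousands.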
19. Why Do Individuals Seek Information? A Selectionist Perspective. Front Psychol 2021; 12:684544. doi: 10.3389/fpsyg.2021.684544. PMID: 34867580; PMCID: PMC8639505.
Abstract
Several authors have proposed that mechanisms of adaptive behavior, and reinforcement learning in particular, can be explained by an innate tendency of individuals to seek information about the local environment. In this article, I argue that these approaches adhere to an essentialist view of learning that avoids the question of why information seeking should be favorable in the first place. I propose a selectionist account of adaptive behavior that explains why individuals behave as if they had a tendency to seek information, without resorting to essentialist explanations. I develop my argument using a formal selectionist framework for adaptive behavior, the multilevel model of behavioral selection (MLBS). The MLBS has been introduced recently as a formal theory of behavioral selection that links reinforcement learning to natural selection within a single unified model. I show that the MLBS implies an average gain in information about the availability of reinforcement. Formally, this means that behavior reaches an equilibrium state if and only if the Fisher information of the conditional probability of reinforcement is maximized. This coincides with a reduction in the randomness of the expected environmental feedback, as captured by the information-theoretic concept of expected surprise (i.e., entropy). The main result is that behavioral selection maximizes the information about the expected fitness consequences of behavior, which, in turn, minimizes average surprise. In contrast to existing attempts to link adaptive behavior to information-theoretic concepts (e.g., the free energy principle), neither information gain nor surprise minimization is treated as a first principle. Instead, the result is formally deduced from the MLBS and therefore constitutes a mathematical property of the more general principle of behavioral selection. Thus, if reinforcement learning is understood as a selection process, there is no need to assume an active agent with an innate tendency to seek information or minimize surprise. Instead, information gain and surprise minimization emerge naturally, because it lies in the very nature of selection to produce order from randomness.
20. Epistemic uncertainty quantification in deep learning classification by the Delta method. Neural Netw 2021; 145:164-176. doi: 10.1016/j.neunet.2021.10.014. PMID: 34749029.
Abstract
The Delta method is a classical procedure for quantifying epistemic uncertainty in statistical models, but its direct application to deep neural networks is prevented by the large number of parameters P. We propose a low-cost approximation of the Delta method applicable to L2-regularized deep neural networks, based on the top K eigenpairs of the Fisher information matrix. We address efficient computation of full-rank approximate eigendecompositions in terms of the exact inverse Hessian, the inverse outer-products-of-gradients approximation, and the so-called Sandwich estimator. Moreover, we provide bounds on the approximation error for the uncertainty of the predictive class probabilities. We show that when the smallest computed eigenvalue of the Fisher information matrix is near the L2-regularization rate, the approximation error will be close to zero even when K ≪ P. A demonstration of the methodology is presented using a TensorFlow implementation, and we show that meaningful rankings of images based on predictive uncertainty can be obtained for two LeNet- and ResNet-based neural networks using the MNIST and CIFAR-10 datasets. Further, we observe that false positives have, on average, higher predictive epistemic uncertainty than true positives. This suggests that the uncertainty measure carries information not captured by the classification alone.
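A minimal sketch of the top-K idea, on a synthetic FIM whose tail eigenvalues sit exactly at the L2-regularization rate (the regime in which, per the abstract, the approximation error vanishes). All dimensions and the matrix itself are hypothetical, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(2)
P, K, lam = 200, 10, 1e-2        # parameter count, kept eigenpairs, L2 rate

# Synthetic Fisher information matrix: K large eigenvalues plus a flat
# tail pinned at the regularization rate lam.
Q, _ = np.linalg.qr(rng.standard_normal((P, P)))
spectrum = np.concatenate([np.full(P - K, lam), np.linspace(1.0, 10.0, K)])
F = (Q * spectrum) @ Q.T

g = rng.standard_normal(P)        # gradient of a network output wrt parameters

# Exact Delta-method predictive variance: g^T F^{-1} g.
exact = g @ np.linalg.solve(F, g)

# Top-K approximation: keep the K largest eigenpairs and model every
# remaining eigenvalue as lam, i.e.
# F^{-1} ~ (1/lam) I + sum_k (1/l_k - 1/lam) v_k v_k^T.
vals, vecs = np.linalg.eigh(F)            # ascending eigenvalues
coeff = vecs[:, -K:].T @ g
approx = (g @ g) / lam + np.sum(coeff**2 * (1.0 / vals[-K:] - 1.0 / lam))
```

Because the tail here equals lam exactly, the approximation reproduces the exact variance while touching only K eigenpairs; in a real network the gap between the smallest computed eigenvalue and lam controls the error, as the paper's bound states.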
21. Transactional Interpretation for the Principle of Minimum Fisher Information. Entropy (Basel) 2021; 23:e23111464. doi: 10.3390/e23111464. PMID: 34828162; PMCID: PMC8622043.
Abstract
The principle of minimum Fisher information states that, among the acceptable probability distributions characterizing a given system, the best description is the one that minimizes the corresponding Fisher information. This principle can be applied to transaction processes, whose dynamics can be interpreted as the market's tendency to minimize the information revealed about itself. More information involves higher costs (information is physical). The starting point for our considerations is a description of the market derived from the assumption of minimum Fisher information for a strategy with a fixed financial risk. Strategies of this type that minimize Fisher information coincide with the well-known eigenstates of the quantum harmonic oscillator. The analytical extension of this field of strategies to the complex vector space (traditional for quantum mechanics) suggests studying the interference of the oscillator eigenstates in terms of their minimization of Fisher information. It is revealed that the superposition of two strategies, the ground state and the second excited state of the oscillator, can have Fisher information lower than that of the ground state itself. Similarly, less information is obtained for a system of strategies (the oscillator eigenstates) randomized by the Gibbs distribution. We distinguish two different views on the description of Fisher information. The first, classical, view is based on the value of Fisher information. The second, which we call transactional, expresses Fisher information from the perspective of the constant risk of market strategies. The orderings of the market strategies derived from these two descriptions are different. From a market standpoint, minimizing Fisher information is equivalent to minimizing risk.
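As a side check on the oscillator connection in this abstract: for a real wavefunction the Fisher information of the position density equals 4⟨p²⟩, so the nth harmonic-oscillator eigenstate carries Fisher information 4n + 2 in units ħ = m = ω = 1. The following numerical verification is a standard identity, not a result taken from the paper:

```python
import numpy as np
from math import factorial, pi
from numpy.polynomial.hermite import hermval

# Normalized harmonic-oscillator eigenstate psi_n (units hbar = m = omega = 1).
def psi(n, x):
    c = np.zeros(n + 1); c[n] = 1.0
    return (2.0**n * factorial(n))**-0.5 * pi**-0.25 * hermval(x, c) * np.exp(-x**2 / 2)

x = np.linspace(-10.0, 10.0, 20001)
for n in range(4):
    dp = np.gradient(psi(n, x), x)
    # For a real wavefunction, I[rho] = int rho'^2 / rho = 4 int psi'^2 = 4<p^2>.
    fisher = 4.0 * np.sum(0.5 * (dp[1:]**2 + dp[:-1]**2) * np.diff(x))
    print(n, fisher)   # analytically 4n + 2
```

The ground state thus has the minimum Fisher information (value 2) among the eigenstates, consistent with its role as the minimum-information strategy in the abstract's framework.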
|
22
|
Measuring Phylogenetic Information of Incomplete Sequence Data. Syst Biol 2021; 71:630-648. [PMID: 34469581 DOI: 10.1093/sysbio/syab073] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2021] [Revised: 08/26/2021] [Accepted: 08/27/2021] [Indexed: 11/13/2022] Open
Abstract
Widely used approaches for extracting phylogenetic information from aligned sets of molecular sequences rely upon probabilistic models of nucleotide substitution or amino-acid replacement. The phylogenetic information that can be extracted depends on the number of columns in the sequence alignment and will be decreased when the alignment contains gaps due to insertion or deletion events. Motivated by the measurement of information loss, we suggest assessment of the Effective Sequence Length (ESL) of an aligned data set. The ESL can differ from the actual number of columns in a sequence alignment because of the presence of alignment gaps. Furthermore, the estimation of phylogenetic information is affected by model misspecification. Inevitably, the actual process of molecular evolution differs from the probabilistic models employed to describe this process. This disparity means the amount of phylogenetic information in an actual sequence alignment will differ from the amount in a simulated data set of equal size, which motivated us to develop a new test for model adequacy. Via theory and empirical data analysis, we show how to disentangle the effects of gaps and model misspecification. By comparing the Fisher information of actual and simulated sequences, we identify which alignment sites and tree branches are most affected by gaps and model misspecification.
|
23
|
Decoherence, Anti-Decoherence, and Fisher Information. ENTROPY 2021; 23:e23081035. [PMID: 34441175 PMCID: PMC8393807 DOI: 10.3390/e23081035] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Revised: 08/09/2021] [Accepted: 08/10/2021] [Indexed: 11/21/2022]
Abstract
In this work, we study quantum decoherence as reflected by the dynamics of a system that accounts for the interaction between matter and a given field. The process is described by an important information geometry tool: Fisher’s information measure (FIM). We find that it appropriately describes this concept, detecting salient details of the quantum–classical changeover (qcc). A good description of the qcc can thus be obtained; in particular, a clear insight into the role that the uncertainty principle (UP) plays in the pertinent proceedings is presented. Plotting FIM versus a system’s motion invariant related to the UP, one can also visualize how anti-decoherence takes place, as opposed to the decoherence process studied in dozens of papers. In Fisher terms, the qcc can be seen as an order (quantum)–disorder (classical, including chaos) transition.
|
24
|
Determining the maximum information gain and optimizing experimental design in neutron reflectometry using the Fisher information. J Appl Crystallogr 2021; 54:1100-1110. [PMID: 34429721 PMCID: PMC8366423 DOI: 10.1107/s160057672100563x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 06/01/2021] [Indexed: 12/02/2022] Open
Abstract
An approach based on the Fisher information (FI) is developed to quantify the maximum information gain and optimal experimental design in neutron reflectometry experiments. In these experiments, the FI can be calculated analytically and used to provide sub-second predictions of parameter uncertainties. This approach can be used to influence real-time decisions about measurement angle, measurement time, contrast choice and other experimental conditions based on parameters of interest. The FI provides a lower bound on parameter estimation uncertainties, and these are shown to decrease with the square root of the measurement time, providing useful information for the planning and scheduling of experimental work. As the FI is computationally inexpensive to calculate, it can be computed repeatedly during the course of an experiment, saving costly beam time by signalling that sufficient data have been obtained or saving experimental data sets by signalling that an experiment needs to continue. The approach's predictions are validated through the introduction of an experiment simulation framework that incorporates instrument-specific incident flux profiles, and through the investigation of measuring the structural properties of a phospholipid bilayer.
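The 1/√(measurement time) scaling of parameter uncertainties quoted in this abstract follows directly from Poisson counting statistics: mean counts grow linearly with time, so the Fisher information does too, and the Cramér-Rao uncertainty falls as the square root. A toy sketch with an assumed single-parameter reflectivity model (not the paper's instrument model):

```python
import numpy as np

# Toy reflectivity r(q; theta) = exp(-theta*q); the model, flux and q-grid
# are illustrative assumptions rather than a real reflectometry setup.
q = np.linspace(0.01, 0.3, 50)
flux = 1e4           # incident counts per unit time per q-point (assumed)
theta = 20.0

def fisher(t):
    mu = t * flux * np.exp(-theta * q)   # Poisson mean counts after time t
    dmu = -q * mu                        # d mu / d theta
    return np.sum(dmu**2 / mu)           # Poisson Fisher information, linear in t

for t in (1.0, 4.0, 16.0):
    print(t, fisher(t) ** -0.5)          # CRB uncertainty halves as t quadruples
```

Because `fisher(t)` is analytic and cheap, it can be re-evaluated continuously during an experiment, which is exactly the scheduling use case the abstract describes.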
|
25
|
Information measures and design issues in the study of mortality deceleration: findings for the gamma-Gompertz model. LIFETIME DATA ANALYSIS 2021; 27:333-356. [PMID: 33630224 PMCID: PMC8238756 DOI: 10.1007/s10985-021-09518-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Accepted: 02/05/2021] [Indexed: 06/12/2023]
Abstract
Mortality deceleration, or the slowing down of death rates at old ages, has been repeatedly investigated, but empirical studies of this phenomenon have produced mixed results. The scarcity of observations at the oldest ages complicates the statistical assessment of mortality deceleration, even in the parsimonious parametric framework of the gamma-Gompertz model considered here. The need for thorough verification of the ages at death can further limit the available data. As logistical constraints may only allow validation of survivors beyond a certain (high) age, samples may be restricted to a certain age range. If we can quantify the effects of the sample size and the age range on the assessment of mortality deceleration, we can make recommendations for study design. For that purpose, we propose applying the concept of the Fisher information and ideas from the theory of optimal design. We compute the Fisher information matrix in the gamma-Gompertz model, and derive information measures for comparing the performance of different study designs. We then discuss interpretations of these measures. The special case in which the frailty variance takes the value of zero and lies on the boundary of the parameter space is given particular attention. The changes in information related to varying sample sizes or age ranges are investigated for specific scenarios. The Fisher information also allows us to study the power of a likelihood ratio test to detect mortality deceleration depending on the study design. We illustrate these methods with a study of mortality among late nineteenth-century French-Canadian birth cohorts.
|
26
|
Maximization of Some Types of Information for Unidentified Item Response Models with Guessing Parameters. PSYCHOMETRIKA 2021; 86:544-563. [PMID: 34235621 DOI: 10.1007/s11336-021-09763-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2019] [Revised: 03/20/2021] [Accepted: 04/05/2021] [Indexed: 06/13/2023]
Abstract
It is known that a family of fixed-effects item response models with equal discrimination and different guessing parameters has no model identifiability. For this family, some types of information including the Fisher information and a new one are maximized to achieve model identification. The conditions of monotonicity of these types of information with respect to a tuning parameter are given. In the case of the logistic model with guessing parameters, it is shown that maxima do not exist under some parametrizations, where a negative lower asymptote can be employed without changing the probabilities of correct responses by examinees.
|
27
|
Spherical-Symmetry and Spin Effects on the Uncertainty Measures of Multidimensional Quantum Systems with Central Potentials. ENTROPY (BASEL, SWITZERLAND) 2021; 23:607. [PMID: 34068983 PMCID: PMC8156006 DOI: 10.3390/e23050607] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 05/10/2021] [Accepted: 05/11/2021] [Indexed: 01/06/2023]
Abstract
The spreading of the stationary states of the multidimensional single-particle systems with a central potential is quantified by means of Heisenberg-like measures (radial and logarithmic expectation values) and entropy-like quantities (Fisher, Shannon, Rényi) of position and momentum probability densities. Since the potential is assumed to be analytically unknown, these dispersion and information-theoretical measures are given by means of inequality-type relations which are explicitly shown to depend on dimensionality and the state's angular hyperquantum numbers. The spherical-symmetry and spin effects on these spreading properties are obtained by use of various integral inequalities (Daubechies-Thakkar, Lieb-Thirring, Redheffer-Weyl, ...) and a variational approach based on the extremization of entropy-like measures. Emphasis is placed on the uncertainty relations, upon which the probabilistic theory of quantum systems essentially relies.
|
28
|
Finite-Sample Bounds on the Accuracy of Plug-in Estimators of Fisher Information. ENTROPY 2021; 23:e23050545. [PMID: 33924955 PMCID: PMC8145518 DOI: 10.3390/e23050545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 04/14/2021] [Accepted: 04/21/2021] [Indexed: 11/16/2022]
Abstract
Finite-sample bounds on the accuracy of Bhattacharya’s plug-in estimator for Fisher information are derived. These bounds are further improved by introducing a clipping step that allows for better control over the score function. This leads to superior upper bounds on the rates of convergence, albeit under slightly different regularity conditions. The performance bounds on both estimators are evaluated for the practically relevant case of a random variable contaminated by Gaussian noise. Moreover, using Brown’s identity, two corresponding estimators of the minimum mean-square error are proposed.
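A minimal sketch of a plug-in Fisher-information estimator with a clipping step, in the spirit of this abstract; note the kernel-density score estimator below is a simple stand-in, not Bhattacharya's estimator from the paper, and the bandwidth and clipping threshold are illustrative assumptions. For a location family, the Fisher information is the expected squared score, I = E[(f′(X)/f(X))²], which for N(0, σ²) equals 1/σ².

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, h = 2000, 2.0, 0.4      # sample size, true scale, kernel bandwidth (assumed)
X = rng.normal(0.0, sigma, n)

# Gaussian-kernel estimates of the density and its derivative at the sample points.
u = (X[:, None] - X[None, :]) / h
phi = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
f = phi.mean(axis=1) / h
df = -(u * phi).mean(axis=1) / h**2

# Plug-in estimate: mean squared estimated score, with a clipping step (as in
# the abstract) to control the score where the density estimate is tiny.
score = np.clip(df / f, -10.0, 10.0)
I_hat = np.mean(score**2)
print(I_hat, 1.0 / sigma**2)      # true Fisher information is 1/sigma^2 = 0.25
```

The kernel smoothing biases the estimate slightly downward (it effectively sees variance σ² + h²), which is the kind of finite-sample effect the paper's bounds quantify.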
|
29
|
Information-Efficient, Off-Center Sampling Results in Improved Precision in 3D Single-Particle Tracking Microscopy. ENTROPY (BASEL, SWITZERLAND) 2021; 23:498. [PMID: 33921987 PMCID: PMC8143542 DOI: 10.3390/e23050498] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 04/19/2021] [Accepted: 04/19/2021] [Indexed: 12/18/2022]
Abstract
In this work, we present a 3D single-particle tracking system that can apply tailored sampling patterns to selectively extract photons that yield the most information for particle localization. We demonstrate that off-center sampling at locations predicted by Fisher information utilizes photons most efficiently. When performing localization in a single dimension, optimized off-center sampling patterns gave doubled precision compared to uniform sampling. A ~20% increase in precision compared to uniform sampling can be achieved when a similar off-center pattern is used in 3D localization. Here, we systematically investigated the photon efficiency of different emission patterns in a diffraction-limited system and achieved higher precision than uniform sampling. The ability to maximize information from the limited number of photons demonstrated here is critical for particle tracking applications in biological samples, where photons may be limited.
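The advantage of off-center sampling can be seen already in one dimension: for Poisson photon detection from a Gaussian intensity profile, the Fisher information per unit dwell time about the particle position peaks at an offset of √2 times the beam width, not at the center. A quick numerical illustration (assumed Gaussian profile and arbitrary units, not the paper's instrument):

```python
import numpy as np

s = 1.0                                # Gaussian beam width (assumed units)
u = np.linspace(0.01, 4.0, 2000)       # sampling offset from the particle

# Photon rate lambda(u) ~ exp(-u^2/(2 s^2)); the Fisher information per unit
# dwell time about the particle position is (dlambda/dposition)^2 / lambda.
lam = np.exp(-u**2 / (2.0 * s**2))
info = lam * u**2 / s**4

print(u[np.argmax(info)])              # maximum near sqrt(2)*s, i.e. off-center
```

At the center the rate is largest but insensitive to position (the derivative vanishes), so photons collected there carry little localization information; the optimum balances sensitivity against count rate.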
|
30
|
Discrete Versions of Jensen-Fisher, Fisher and Bayes-Fisher Information Measures of Finite Mixture Distributions. ENTROPY 2021; 23:e23030363. [PMID: 33803766 PMCID: PMC8003337 DOI: 10.3390/e23030363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 03/07/2021] [Accepted: 03/16/2021] [Indexed: 11/16/2022]
Abstract
In this work, we first consider the discrete version of Fisher information measure and then propose Jensen–Fisher information, to develop some associated results. Next, we consider Fisher information and Bayes–Fisher information measures for mixing parameter vector of a finite mixture probability mass function and establish some results. We provide some connections between these measures with some known informational measures such as chi-square divergence, Shannon entropy, Kullback–Leibler, Jeffreys and Jensen–Shannon divergences.
|
31
|
Unveiling Informational Properties of the Chen-Ouillon-Sornette Seismo-Electrical Model. ENTROPY 2021; 23:e23030337. [PMID: 33809156 PMCID: PMC8002195 DOI: 10.3390/e23030337] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Revised: 03/07/2021] [Accepted: 03/10/2021] [Indexed: 11/26/2022]
Abstract
Seismo-electrical coupling is critical to understanding the mechanism of geoelectrical precursors to earthquakes. A novel seismo-electrical model, called the Chen–Ouillon–Sornette (COS) model, has been developed by combining the Burridge–Knopoff spring-block system with the mechanisms of stress-activated charge carriers (i.e., electrons and holes) and pressure-stimulated currents. Such a model, thus, can simulate fracture-induced electrical signals at a laboratory scale or earthquake-related geoelectrical signals at a geological scale. In this study, by using information measures of time series analysis, we attempt to understand the influence of diverse electrical conditions on the characteristics of the electrical signals simulated with the COS model. We employ the Fisher–Shannon method to investigate the temporal dynamics of the COS model. The results show that the electrical parameters of the COS model, particularly the capacitance and inductance, affect the level of order/disorder in the electrical time series. Compared to the field observations, we infer that the underground electrical condition shifts toward larger capacitance or smaller inductance during seismogenic processes. Accordingly, this study may provide a better understanding of the mechanical–electrical coupling of the earth’s crust.
|
32
|
Fisher Information of Free-Electron Landau States. ENTROPY 2021; 23:e23030268. [PMID: 33668725 PMCID: PMC7996270 DOI: 10.3390/e23030268] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 02/18/2021] [Accepted: 02/22/2021] [Indexed: 01/04/2023]
Abstract
An electron in a constant magnetic field has energy levels, known as the Landau levels. One can obtain the corresponding radial wavefunction of free-electron Landau states in cylindrical polar coordinates. However, this system has not been explored so far from an information-theoretical viewpoint. Here, we focus on Fisher information associated with these Landau states specified by the two quantum numbers. Fisher information provides a useful measure of the electronic structure in quantum systems, such as hydrogen-like atoms and systems under various potentials. By numerically evaluating the generalized Laguerre polynomials in the radial densities, we report that Fisher information increases linearly with the principal quantum number that specifies energy levels, but decreases monotonically with the azimuthal quantum number m. We also present relative Fisher information of the Landau states against the reference density with m=0, which is proportional to the principal quantum number. We compare it with the case when the lowest Landau level state is set as the reference.
|
33
|
A Unified Approach to Local Quantum Uncertainty and Interferometric Power by Metric Adjusted Skew Information. ENTROPY 2021; 23:e23030263. [PMID: 33668150 PMCID: PMC7995958 DOI: 10.3390/e23030263] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 02/19/2021] [Accepted: 02/20/2021] [Indexed: 11/30/2022]
Abstract
Local quantum uncertainty and interferometric power were introduced by Girolami et al. as geometric quantifiers of quantum correlations. The aim of the present paper is to discuss their properties in a unified manner by means of the metric adjusted skew information defined by Hansen.
|
34
|
Systematic bias in studies of consumer functional responses. Ecol Lett 2021; 24:580-593. [PMID: 33381898 DOI: 10.1111/ele.13660] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 11/09/2020] [Accepted: 11/18/2020] [Indexed: 12/31/2022]
Abstract
Functional responses are a cornerstone to our understanding of consumer-resource interactions, so how to best describe them using models has been actively debated. Here we focus on the consumer dependence of functional responses to demonstrate systematic bias in the statistical comparison of functional-response models and the estimation of their parameters. Both forms of bias are universal to nonlinear models (irrespective of consumer dependence) and are rooted in a lack of sufficient replication. Using a large compilation of published datasets, we show that - due to the prevalence of low-sample-size studies - neither the overall frequency by which alternative models achieve top rank nor the frequency distribution of parameter point estimates should be treated as providing insight into the general form or central tendency of consumer interference. We call for renewed clarity in the varied purposes that motivate the study of functional responses, purposes that can compete with each other in dictating the design, analysis and interpretation of functional-response experiments.
|
35
|
Eigensolution techniques, expectation values and Fisher information of Wei potential function. J Mol Model 2020; 26:311. [PMID: 33089429 DOI: 10.1007/s00894-020-04573-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 10/15/2020] [Indexed: 11/30/2022]
Abstract
An approximate solution of the one-dimensional relativistic Klein-Gordon equation was obtained for an improved expression of the Wei potential energy function. The solution of the non-relativistic Schrödinger equation was obtained from the solution of the relativistic Klein-Gordon equation by certain mappings. We have calculated Fisher information for position space and momentum space via the computation of expectation values. The effects of some parameters of the Wei potential energy function on the Fisher information were fully examined graphically. We have also examined the effects of the quantum number n and the angular momentum quantum number ℓ on the expectation values and Fisher information respectively for some selected molecules. Our results revealed that, as most parameters of the Wei potential energy function are varied, the Fisher information does not obey the Heisenberg uncertainty relation for Fisher information, whereas it does obey the relation as the quantum number and the angular momentum quantum number are varied.
|
36
|
Application of OU processes to modelling temporal dynamics of the human microbiome, and calculating optimal sampling schemes. BMC Bioinformatics 2020; 21:450. [PMID: 33045987 PMCID: PMC7549249 DOI: 10.1186/s12859-020-03747-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2020] [Accepted: 09/11/2020] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The vast majority of microbiome research so far has focused on the structure of the microbiome at a single time-point. There have been several studies that measure the microbiome from a particular environment over time. A few models have been developed by extending time series models to accommodate specific features in microbiome data to address questions of stability and interactions of the microbiome time series. Most research has observed the stability and mean reversion for some microbiomes. However, little has been done to study the mean reversion rates of these stable microbes and how sampling frequencies are related to such conclusions. In this paper, we begin to rectify this situation. We analyse two widely studied microbial time series data sets on four healthy individuals. We choose to study healthy individuals because we are interested in the baseline temporal dynamics of the microbiome. RESULTS For this analysis, we focus on the temporal dynamics of individual genera, absorbing all interactions in a stochastic term. We use a simple stochastic differential equation model to assess the following three questions. (1) Does the microbiome exhibit temporal continuity? (2) Does the microbiome have a stable state? (3) To better understand the temporal dynamics, how frequently should data be sampled in future studies? We find that a simple Ornstein-Uhlenbeck model which incorporates both temporal continuity and reversion to a stable state fits the data for almost every genus better than a Brownian motion model that contains only temporal continuity. The Ornstein-Uhlenbeck model also fits the data better than modelling separate time points as independent. Under the Ornstein-Uhlenbeck model, we calculate the variance of the estimated mean reversion rate (the speed with which each genus returns to its stable state). Based on this calculation, we are able to determine the optimal sample schemes for studying temporal dynamics.
CONCLUSIONS There is evidence of temporal continuity for most genera; there is clear evidence of a stable state; and the optimal sampling frequency for studying temporal dynamics is in the range of one sample every 0.8-3.2 days.
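A minimal sketch of the kind of Ornstein-Uhlenbeck analysis described above, with assumed parameter values rather than fitted microbiome data: simulate the process with its exact discretization, then recover the mean-reversion rate from an AR(1) regression on the lagged series.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, mu, sig, dt, n = 0.8, 0.0, 0.5, 1.0, 5000   # assumed: one sample per "day"

# Exact discretization of the OU process dX = -alpha*(X - mu) dt + sig dW.
a = np.exp(-alpha * dt)
sd = sig * np.sqrt((1.0 - a**2) / (2.0 * alpha))
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + a * (x[t] - mu) + sd * rng.normal()

# AR(1) regression: the lag-1 slope estimates exp(-alpha*dt), so the
# mean-reversion rate is recovered as -log(slope)/dt.
x0, x1 = x[:-1] - x.mean(), x[1:] - x.mean()
slope = (x0 @ x1) / (x0 @ x0)
alpha_hat = -np.log(slope) / dt
print(alpha_hat)       # close to the true alpha = 0.8
```

The variance of `alpha_hat` depends on both the number of samples and the spacing `dt`, which is the trade-off behind the paper's optimal-sampling recommendation.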
|
37
|
Fisher information analysis of list-mode SPECT emission data for joint estimation of activity and attenuation distribution. INVERSE PROBLEMS 2020; 36:084002. [PMID: 33071423 PMCID: PMC7561050 DOI: 10.1088/1361-6420/ab958b] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
The potential to perform attenuation and scatter compensation (ASC) in single-photon emission computed tomography (SPECT) imaging without a separate transmission scan is highly significant. In this context, attenuation in SPECT is primarily due to Compton scattering, where the probability of Compton scatter is proportional to the attenuation coefficient of the tissue, and the energy of the scattered photon and the scattering angle are related. Based on this premise, we investigated whether the SPECT scattered-photon data acquired in list-mode (LM) format and including the energy information can be used to estimate the attenuation map. For this purpose, we propose a Fisher-information-based method that yields the Cramér-Rao bound (CRB) for the task of jointly estimating the activity and attenuation distribution using only the SPECT emission data. In the process, a path-based formalism to process the LM SPECT emission data, including the scattered-photon data, is proposed. The Fisher information method was implemented on NVIDIA graphics processing units (GPU) for acceleration. The method was applied to analyze the information content of SPECT LM emission data, which contains up to first-order scattered events, in a simulated SPECT system with parameters modeling a clinical system using realistic computational studies with 2-D digital synthetic and anthropomorphic phantoms. The method was also applied to LM data containing up to second-order scatter for a synthetic phantom. Experiments with anthropomorphic phantoms simulated myocardial perfusion and dopamine transporter (DaT)-Scan SPECT studies. The results show that the CRB obtained for the attenuation and activity coefficients was typically much lower than the true value of these coefficients. An increase in the number of detected photons yielded a lower CRB for both the attenuation and activity coefficients.
Further, we observed that systems with better energy resolution yielded a lower CRB for the attenuation coefficient. Overall, the results provide evidence that LM SPECT emission data, including the scattered photons, contains information to jointly estimate the activity and attenuation coefficients.
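The dependence of the CRB on the number of detected photons can be illustrated with a generic Poisson-data Fisher information matrix; the toy system matrix and sizes below are assumptions for illustration, not the paper's path-based LM SPECT model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_det, n_vox = 64, 8                        # toy sizes, not a real SPECT geometry

A = rng.uniform(0.1, 1.0, (n_det, n_vox))   # assumed system matrix
x = rng.uniform(5.0, 20.0, n_vox)           # true activity per voxel
ybar = A @ x                                # mean detected counts

# Poisson-data Fisher information matrix F = A^T diag(1/ybar) A, and the
# resulting Cramer-Rao bound on each activity coefficient.
F = A.T @ (A / ybar[:, None])
crb = np.diag(np.linalg.inv(F))

# Scaling the mean counts by s scales F by s, so the CRB falls as 1/s.
s = 4.0
crb_scaled = np.diag(np.linalg.inv(s * F))
print(crb[0], crb_scaled[0])                # second value is 4x smaller
```

This 1/s behavior is the generic reason more detected photons yielded lower CRB values in the abstract's studies.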
|
38
|
Abstract
A social system is susceptible to perturbation when its collective properties depend sensitively on a few pivotal components. Using the information geometry of minimal models from statistical physics, we develop an approach to identify pivotal components to which coarse-grained, or aggregate, properties are sensitive. As an example, we introduce our approach on a reduced toy model with a median voter who always votes in the majority. The sensitivity of majority-minority divisions to changing voter behaviour pinpoints the unique role of the median. More generally, the sensitivity identifies pivotal components that precisely determine collective outcomes generated by a complex network of interactions. Using perturbations to target pivotal components in the models, we analyse datasets from political voting, finance and Twitter. Across these systems, we find remarkable variety, from systems dominated by a median-like component to those whose components behave more equally. In the context of political institutions such as courts or legislatures, our methodology can help describe how changes in voters map to new collective voting outcomes. For economic indices, differing system response reflects varying fiscal conditions across time. Thus, our information-geometric approach provides a principled, quantitative framework that may help assess the robustness of collective outcomes to targeted perturbation and compare social institutions, or even biological networks, with one another and across time.
|
39
|
Source shot noise mitigation in focused ion beam microscopy by time-resolved measurement. Ultramicroscopy 2020; 211:112948. [PMID: 32171978 DOI: 10.1016/j.ultramic.2020.112948] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 12/23/2019] [Accepted: 01/26/2020] [Indexed: 11/18/2022]
Abstract
Focused ion beam microscopy suffers from source shot noise - random variation in the number of incident ions in any fixed dwell time - along with random variation in the number of detected secondary electrons per incident ion. This multiplicity of sources of randomness increases the variance of the measurements and thus worsens the trade-off between incident ion dose and image accuracy. Repeated measurement with low dwell time, without changing the total ion dose, is a way to introduce time resolution to this form of microscopy. Through theoretical analyses and Monte Carlo simulations, we show that three ways to process time-resolved measurements result in mean-squared error (MSE) improvements compared to the conventional method of having no time resolution. In particular, maximum likelihood estimation provides reduction in MSE or reduction in required dose by a multiplicative factor approximately equal to the secondary electron yield. This improvement factor is similar to complete mitigation of source shot noise. Experiments with a helium ion microscope are consistent with the analyses and suggest accuracy improvement for a fixed source dose by a factor of about 4.
|
40
|
Towards unconstrained compartment modeling in white matter using diffusion-relaxation MRI with tensor-valued diffusion encoding. Magn Reson Med 2020; 84:1605-1623. [PMID: 32141131 DOI: 10.1002/mrm.28216] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Revised: 01/27/2020] [Accepted: 01/28/2020] [Indexed: 01/05/2023]
Abstract
PURPOSE To optimize diffusion-relaxation MRI with tensor-valued diffusion encoding for precise estimation of compartment-specific fractions, diffusivities, and T2 values within a two-compartment model of white matter, and to explore the approach in vivo. METHODS Sampling protocols featuring different b-values (b), b-tensor shapes (bΔ), and echo times (TE) were optimized using Cramér-Rao lower bounds (CRLB). Whole-brain data were acquired in children, adults, and elderly with white matter lesions. Compartment fractions, diffusivities, and T2 values were estimated in a model featuring two microstructural compartments represented by a "stick" and a "zeppelin." RESULTS Precise parameter estimates were enabled by sampling protocols featuring seven or more "shells" with unique b/bΔ/TE-combinations. Acquisition times were approximately 15 minutes. In white matter of adults, the "stick" compartment had a fraction of approximately 0.5 and, compared with the "zeppelin" compartment, featured lower isotropic diffusivities (0.6 vs. 1.3 μm²/ms) but higher T2 values (85 vs. 65 ms). Children featured lower "stick" fractions (0.4). White matter lesions exhibited high "zeppelin" isotropic diffusivities (1.7 μm²/ms) and T2 values (150 ms). CONCLUSIONS Diffusion-relaxation MRI with tensor-valued diffusion encoding expands the set of microstructure parameters that can be precisely estimated and therefore increases their specificity to biological quantities.
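A hedged sketch of CRLB-based protocol evaluation for a simplified two-compartment diffusion-relaxation signal. The model below ignores b-tensor shape entirely (a deliberate simplification of the stick/zeppelin model); the parameter values loosely follow the abstract, while the design points and noise level are assumptions.

```python
import numpy as np

# Simplified two-compartment diffusion-relaxation signal (no b-tensor shape
# dependence; an illustrative stand-in for the stick/zeppelin model).
def signal(theta, b, te):
    f, d1, d2, t21, t22 = theta
    return f * np.exp(-b * d1 - te / t21) + (1 - f) * np.exp(-b * d2 - te / t22)

theta = np.array([0.5, 0.6, 1.3, 85.0, 65.0])          # f, D [um^2/ms], T2 [ms]
b = np.array([0., 1., 2., 0., 1., 2., 3.])             # b-values [ms/um^2] (assumed)
te = np.array([70., 70., 70., 110., 110., 110., 90.])  # echo times [ms] (assumed)
sigma = 0.01                                           # Gaussian noise SD (assumed)

# Numerical gradient of the signal w.r.t. each parameter at each design point.
eps = 1e-6 * np.maximum(1.0, np.abs(theta))
G = np.stack([
    (signal(theta + eps[k] * np.eye(5)[k], b, te)
     - signal(theta - eps[k] * np.eye(5)[k], b, te)) / (2 * eps[k])
    for k in range(5)], axis=1)

J = G.T @ G / sigma**2                   # Fisher information matrix (Gaussian noise)
crlb = np.sqrt(np.diag(np.linalg.inv(J)))   # lower bound on each parameter's SD
print(crlb)
```

Re-evaluating `crlb` for candidate sets of b/TE combinations, and keeping the design that minimizes the bounds on the parameters of interest, is the generic pattern behind the CRLB-optimized protocols in the abstract.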
|
41
|
A novel quantification of information for longitudinal data analyzed by mixed-effects modeling. Pharm Stat 2020; 19:388-398. [PMID: 31989784 DOI: 10.1002/pst.1996] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 06/18/2019] [Revised: 11/24/2019] [Accepted: 11/27/2019] [Indexed: 12/11/2022]
Abstract
Nonlinear mixed-effects (NLME) modeling is one of the most powerful tools for analyzing longitudinal data especially under the sparse sampling design. The determinant of the Fisher information matrix is a commonly used global metric of the information that can be provided by the data under a given model. However, in clinical studies, it is also important to measure how much information the data provide for a certain parameter of interest under the assumed model, for example, the clearance in population pharmacokinetic models. This paper proposes a new, easy-to-interpret information metric, the "relative information" (RI), which is designed for specific parameters of a model and takes a value between 0% and 100%. We establish the relationship between interindividual variability for a specific parameter and the variance of the associated parameter estimator, demonstrating that, under a "perfect" experiment (eg, infinite samples and/or minimal experimental error), the RI and the variance of the model parameter estimator converge, respectively, to 100% and to the ratio of the interindividual variability for that parameter to the number of subjects. Extensive simulation experiments and analyses of three real datasets show that our proposed RI metric can accurately characterize the information for parameters of interest for NLME models. The new information metric can be readily used to facilitate study designs and model diagnosis.
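The limiting behaviour claimed for a "perfect" experiment can be checked with a few lines of simulation (a hypothetical one-parameter population model of my own choosing, not the paper's RI computation): when each subject's parameter is recovered exactly, the variance of the population estimate tends to the interindividual variance divided by the number of subjects, which is where the RI would read 100%.

```python
import random
import statistics

def popvar_of_estimator(mu=1.0, omega=0.3, n_subj=50, reps=2000, seed=2):
    """Sampling variance of the population-mean estimate when every subject's
    parameter is recovered exactly (the 'perfect experiment' limit)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        # individual parameters: theta_i ~ Normal(mu, omega^2)
        thetas = [rng.gauss(mu, omega) for _ in range(n_subj)]
        estimates.append(sum(thetas) / n_subj)  # population estimate
    return statistics.variance(estimates)

v = popvar_of_estimator()
print(v)  # close to omega**2 / n_subj = 0.09 / 50 = 0.0018
```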
|
42
|
Experimental design for parameter estimation in steady-state linear models of metabolic networks. Math Biosci 2019; 319:108291. [PMID: 31786081 DOI: 10.1016/j.mbs.2019.108291] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 03/22/2019] [Revised: 11/20/2019] [Accepted: 11/20/2019] [Indexed: 12/18/2022]
Abstract
Metabolic networks are typically large, containing many metabolites and reactions. Dynamical models that aim to simulate such networks will consist of a large number of ordinary differential equations, with many kinetic parameters that must be estimated from experimental data. We assume these data to be metabolomics measurements made under steady-state conditions for different input fluxes. Assuming linear kinetics, analytical criteria for parameter identifiability are provided. For normally distributed error terms, we also calculate the Fisher information matrix analytically to be used in the D-optimality criterion. A test network illustrates the developed tool chain for finding an optimal experimental design. The first stage is to verify global or pointwise parameter identifiability, the second to find optimal input fluxes, and the third to remove redundant measurements.
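The second stage can be made concrete with a minimal sketch (a hypothetical two-metabolite chain with made-up rate constants, not the paper's network): for a linear pathway the steady-state sensitivities are available in closed form, so the Fisher information matrix is analytic and input fluxes can be ranked by the D-optimality criterion.

```python
import itertools
import numpy as np

def fim(us, k1=1.0, k2=2.0, sigma=0.05):
    """Fisher information for the linear chain u -> X1 --k1--> X2 --k2--> out,
    whose steady states are x1 = u/k1 and x2 = u/k2 (measured with noise sigma)."""
    F = np.zeros((2, 2))
    for u in us:
        J = np.array([[-u / k1**2, 0.0],   # d(x1, x2)/d(k1, k2)
                      [0.0, -u / k2**2]])
        F += J.T @ J / sigma**2
    return F

candidates = [0.2, 0.5, 1.0, 2.0]
best = max(itertools.combinations(candidates, 2),
           key=lambda us: np.linalg.det(fim(us)))
print(best)  # D-optimality selects the largest input fluxes here
```

For this toy chain the determinant grows with the squared input fluxes, so the optimal two-experiment design uses the two largest candidates.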
|
43
|
Optimal experimental design for predator-prey functional response experiments. J R Soc Interface 2019; 15:rsif.2018.0186. [PMID: 30021925 DOI: 10.1098/rsif.2018.0186] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Received: 03/16/2018] [Accepted: 06/25/2018] [Indexed: 11/12/2022] Open
Abstract
Functional response models are important in understanding predator-prey interactions. The development of functional response methodology has progressed from mechanistic models to more statistically motivated models that can account for variance and the over-dispersion commonly seen in the datasets collected from functional response experiments. However, little information seems to be available for those wishing to prepare optimal parameter estimation designs for functional response experiments. It is worth noting that optimally designed experiments may require smaller sample sizes to achieve the same statistical outcomes as non-optimally designed experiments. In this paper, we develop a model-based approach to optimal experimental design for functional response experiments in the presence of parameter uncertainty (also known as a robust optimal design approach). Further, we develop and compare new utility functions which better focus on the statistical efficiency of the designs; these utilities are generally applicable for robust optimal design in other applications (not just in functional response). The methods are illustrated using a beta-binomial functional response model for two published datasets: an experiment involving the freshwater predator Notonecta glauca (an aquatic insect) preying on Asellus aquaticus (a small crustacean), and another experiment involving a ladybird beetle (Propylea quatuordecimpunctata L.) preying on the black bean aphid (Aphis fabae Scopoli). As a by-product, we also derive necessary quantities to perform optimal design for beta-binomial regression models, which may be useful in other applications.
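The robust-design idea, averaging a D-optimality utility over a prior on the parameters, can be sketched as follows. This is a deliberately simplified stand-in (a plain binomial Holling type-II model with invented prior ranges and candidate prey densities), not the paper's beta-binomial model or its refined utilities:

```python
import itertools
import math
import random

def fim_2x2(design, a, h, T=1.0):
    """Fisher information when consumption is binomial: each of N prey is
    eaten with probability q = a*T / (1 + a*h*N) (Holling type II)."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for N in design:
        s = 1.0 + a * h * N
        q = a * T / s
        ga = T / s**2                # dq/da (attack rate)
        gh = -a * a * N * T / s**2   # dq/dh (handling time)
        w = N / (q * (1.0 - q))      # binomial information weight
        F[0][0] += w * ga * ga
        F[0][1] += w * ga * gh
        F[1][0] += w * ga * gh
        F[1][1] += w * gh * gh
    return F

def robust_logdet(design, prior_draws):
    """Robust D-utility: log det of the FIM averaged over prior draws."""
    total = 0.0
    for a, h in prior_draws:
        F = fim_2x2(design, a, h)
        total += math.log(F[0][0] * F[1][1] - F[0][1] * F[1][0])
    return total / len(prior_draws)

rng = random.Random(4)
draws = [(rng.uniform(0.1, 0.3), rng.uniform(0.3, 0.7)) for _ in range(200)]
cands = [5, 10, 20, 40, 80]
best = max(itertools.combinations(cands, 2),
           key=lambda d: robust_logdet(d, draws))
print(best)  # designs spreading the prey densities widely score best
```

Averaging the utility over the prior, rather than evaluating it at a single guess, is what makes the design robust to parameter uncertainty.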
|
44
|
Some Problems With the Analytical Argument in Support of RP67 in the Context of the Bookmark Standard Setting Method. APPLIED PSYCHOLOGICAL MEASUREMENT 2019; 43:481-492. [PMID: 31452556 PMCID: PMC6696871 DOI: 10.1177/0146621618800272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/10/2023]
Abstract
The choice of response probability in the bookmark method has been shown to affect outcomes in important ways. These findings have implications for the validity of the bookmark method because panelists' inability to internally adjust when given different response probabilities suggests that they are not performing the intended judgment task. In response to the concerns these findings raise, proponents of the bookmark method argue that such concerns can be addressed by using a response probability of .67. A crucial part of their argument includes the often-repeated claim that the .67 value corresponds with the maximum information for a correct response, which is believed to be beneficial in some way. In this article, it is shown that this claim is mistaken; that the formula upon which the .67 result is based is incorrect; that (for the relevant measurement model) there is no difference between the information for a correct response, for an incorrect response, or for the item overall; and, more generally, that the "maximize information" approach is based on the wrong likelihood function altogether.
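The claim that the information is the same for a correct response, an incorrect response, and the item overall is easy to verify numerically for a Rasch item (a standard IRT choice here; the ability value and difficulty below are illustrative, and whether this matches the paper's exact model is my assumption): the negative second derivative of the log-likelihood equals P(1-P) in every case.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(b - theta))

def observed_info(theta, b, correct, eps=1e-4):
    """-(d^2/dtheta^2) of the log-likelihood of one response,
    by central second difference."""
    def ll(t):
        p = rasch_p(t, b)
        return math.log(p) if correct else math.log(1.0 - p)
    return -(ll(theta + eps) - 2.0 * ll(theta) + ll(theta - eps)) / eps**2

theta, b = 0.3, -0.2
p = rasch_p(theta, b)
print(observed_info(theta, b, True), observed_info(theta, b, False), p * (1 - p))
# all three agree: the information does not depend on the response
```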
|
45
|
Comparing Information Metrics for a Coupled Ornstein-Uhlenbeck Process. ENTROPY 2019; 21:e21080775. [PMID: 33267488 PMCID: PMC7515303 DOI: 10.3390/e21080775] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 07/09/2019] [Revised: 07/30/2019] [Accepted: 08/06/2019] [Indexed: 11/24/2022]
Abstract
It is often the case when studying complex dynamical systems that a statistical formulation can provide the greatest insight into the underlying dynamics. When discussing the behavior of such a system as it evolves in time, it is useful to have a notion of a metric between two given states. A popular measure of information change in a system under perturbation has been the relative entropy of the states, as this notion allows us to quantify the difference between states of a system at different times. In this paper, we investigate the relaxation problem given by single and coupled Ornstein–Uhlenbeck (O-U) processes and compare the information length with entropy-based metrics (relative entropy, Jensen divergence) as well as others. By measuring the total information length in the long time limit, we show that it is only the information length that preserves the linear geometry of the O-U process. In the coupled O-U process, the information length is shown to be capable of detecting changes in both components of the system even when other metrics would detect almost nothing in one of the components. We show in detail that the information length is sensitive to the evolution of subsystems.
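For a Gaussian O-U relaxation at fixed (equilibrium) variance, the information length reduces to the integral of |dm/dt|/σ, which evaluates to m0/σ. The sketch below (my own minimal 1D example with illustrative parameters, not the paper's coupled system) checks this numerically and shows the linear dependence on the initial displacement, the "linear geometry" the abstract refers to:

```python
import math

def information_length(m0, gamma=1.0, sigma=0.5, dt=1e-4, t_max=20.0):
    """Numerical information length of an O-U relaxation with mean
    m(t) = m0*exp(-gamma*t) and constant variance sigma**2:
    dL = |dm/dt| / sigma * dt for a Gaussian of fixed width."""
    L, t = 0.0, 0.0
    while t < t_max:
        dm_dt = -gamma * m0 * math.exp(-gamma * t)
        L += abs(dm_dt) / sigma * dt
        t += dt
    return L

# the total information length is linear in the initial displacement: m0/sigma
print(information_length(1.0), information_length(2.0))
```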
|
46
|
Limiting Uncertainty Relations in Laser-Based Measurements of Position and Velocity Due to Quantum Shot Noise. ENTROPY 2019; 21:e21030264. [PMID: 33266979 PMCID: PMC7514745 DOI: 10.3390/e21030264] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Received: 01/24/2019] [Revised: 03/02/2019] [Accepted: 03/05/2019] [Indexed: 12/29/2022]
Abstract
With the ongoing progress of optoelectronic components, laser-based measurement systems allow measurements of position as well as displacement, strain and velocity with unbeatable speed and low measurement uncertainty. The performance limit is often studied for a single measurement setup, but a fundamental comparison of different measurement principles with respect to the ultimate limit due to quantum shot noise is rare. For this purpose, the Cramér-Rao bound is described as a universal information theoretic tool to calculate the minimal achievable measurement uncertainty for different measurement techniques, and a review of the respective lower bounds for laser-based measurements of position, displacement, strain and velocity at particles and surfaces is presented. As a result, the calculated Cramér-Rao bounds of different measurement principles have similar forms for each measurand, including an inverse proportionality with respect to the number of photons and, in the case of position measurement for instance, the wavenumber squared. Furthermore, an uncertainty principle between the position uncertainty and the wave vector uncertainty was identified, i.e., the measurement uncertainty is minimized by maximizing the wave vector uncertainty. Additionally, physically complementary measurement approaches such as interferometric and time-of-flight position measurements, as well as time-of-flight and Doppler particle velocity measurements, are shown to attain the same fundamental limit. Since most laser-based measurements perform similarly with respect to quantum shot noise, realized measurement systems differ mainly through the optoelectronic components available for the concrete measurement task.
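The scaling of the position Cramér-Rao bound with photon number and wavenumber squared can be reproduced with a toy Poisson fringe-detection model (a hypothetical uniform-phase detector array of my own; the wavelength and photon budget are illustrative):

```python
import math

def fisher_info_position(n_photons, k, x, n_det=64):
    """Fisher information about position x carried by Poisson counts on a
    detector array sampling the fringe I ~ 1 + cos(2*k*x + phi)."""
    total = 0.0
    for i in range(n_det):
        phi = 2.0 * math.pi * i / n_det
        lam = n_photons / n_det * (1.0 + math.cos(2.0 * k * x + phi))
        dlam = -n_photons / n_det * 2.0 * k * math.sin(2.0 * k * x + phi)
        if lam > 1e-12:
            total += dlam**2 / lam  # Poisson channel: (dlam/dx)^2 / lam
    return total

k = 2.0 * math.pi / 0.633        # wavenumber of a 633 nm source, in 1/um
info = fisher_info_position(1e6, k, x=0.1)
print(1.0 / math.sqrt(info))     # CRLB std dev, scales as 1/(k*sqrt(N))
```

Summed over a full fringe the information equals 4k²N, so the minimal position uncertainty is 1/(2k√N), reproducing the inverse proportionality to photon number and wavenumber squared.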
|
47
|
Information Geometric Duality of ϕ-Deformed Exponential Families. ENTROPY 2019; 21:e21020112. [PMID: 33266828 PMCID: PMC7514596 DOI: 10.3390/e21020112] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 12/24/2018] [Revised: 01/11/2019] [Accepted: 01/16/2019] [Indexed: 11/16/2022]
Abstract
In the world of generalized entropies (which, for example, play a role in physical systems with sub- and super-exponential phase space growth per degree of freedom), there are two ways of implementing constraints in the maximum entropy principle: linear and escort constraints. Both appear naturally in different contexts. Linear constraints appear, e.g., in physical systems when additional information about the system is available through higher moments. Escort distributions appear naturally in the context of multifractals and information geometry. It was shown recently that there exists a fundamental duality that relates both approaches on the basis of the corresponding deformed logarithms (deformed-log duality). Here, we show that there exists another duality that arises in the context of information geometry, relating the Fisher information of ϕ-deformed exponential families that correspond to linear constraints (as studied by J. Naudts) to those that are based on escort constraints (as studied by S.-I. Amari). We explicitly demonstrate this information geometric duality for the case of (c,d)-entropy, which covers all situations that are compatible with the first three Shannon–Khinchin axioms and which includes Shannon, Tsallis, and Anteneodo–Plastino entropy, among many others, as special cases. Finally, we discuss the relation between the deformed-log duality and the information geometric duality, and note that the escort distributions arising in these two dualities are generally different and coincide only for the Tsallis deformation.
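As a minimal illustration of the escort construction that underlies one side of the duality (written here for the Tsallis case, with made-up probabilities):

```python
def escort(p, q):
    """Escort distribution: P_i proportional to p_i**q (q = 1 gives p back)."""
    w = [pi**q for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.7, 0.2, 0.1]
print(escort(p, 1.0))  # the original distribution
print(escort(p, 2.0))  # escort weighting emphasises the likely states
```

Expectations taken under such escort distributions, rather than under p itself, are what the escort constraints feed into the maximum entropy principle.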
|
48
|
Thermodynamic efficiency of contagions: a statistical mechanical analysis of the SIS epidemic model. Interface Focus 2018; 8:20180036. [PMID: 30443333 PMCID: PMC6227806 DOI: 10.1098/rsfs.2018.0036] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Accepted: 09/10/2018] [Indexed: 01/23/2023] Open
Abstract
We present a novel approach to the study of epidemics on networks as thermodynamic phenomena, quantifying the thermodynamic efficiency of contagions, considered as distributed computational processes. Modelling SIS dynamics on a contact network in statistical-mechanical terms, we follow the maximum entropy (MaxEnt) principle to obtain steady-state distributions and derive, under certain assumptions, relevant thermodynamic quantities both analytically and numerically. In particular, we obtain closed-form solutions for some cases, while interpreting key epidemic variables, such as the reproductive ratio of a SIS model, in a statistical mechanical setting. On the other hand, we consider configuration and free entropy, as well as the Fisher information, in the epidemiological context. This allows us to identify criticality and distinct phases of epidemic processes. For each of the considered thermodynamic quantities, we compare the analytical solutions informed by the MaxEnt principle with numerical estimates for SIS epidemics simulated on Watts-Strogatz random graphs.
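The role of the reproductive ratio as a critical parameter can be sketched with the mean-field SIS equation (a deliberate simplification with illustrative rates; the paper works with MaxEnt steady-state distributions on Watts-Strogatz graphs): crossing R0 = 1 separates the die-out phase from the endemic phase.

```python
def sis_mean_field(beta, gamma, k_mean, x0=0.01, dt=0.01, steps=200000):
    """Euler integration of the mean-field SIS equation
    dx/dt = beta*<k>*x*(1-x) - gamma*x for the infected fraction x."""
    x = x0
    for _ in range(steps):
        x += dt * (beta * k_mean * x * (1.0 - x) - gamma * x)
    return x

# reproductive ratio R0 = beta*<k>/gamma separates the two phases
print(sis_mean_field(0.05, 0.25, 10))  # R0 = 2: endemic, x -> 1 - 1/R0 = 0.5
print(sis_mean_field(0.02, 0.25, 10))  # R0 = 0.8: the epidemic dies out
```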
|
49
|
Item Selection Methods in Multidimensional Computerized Adaptive Testing With Polytomously Scored Items. APPLIED PSYCHOLOGICAL MEASUREMENT 2018; 42:677-694. [PMID: 30559574 PMCID: PMC6291894 DOI: 10.1177/0146621618762748] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 06/09/2023]
Abstract
Multidimensional computerized adaptive testing (MCAT) has been developed over the past decades, but most existing procedures can deal only with dichotomously scored items. However, polytomously scored items have been broadly used in a variety of tests for their advantages of providing more information and testing complicated abilities and skills. The purpose of this study is to discuss the item selection algorithms used in MCAT with polytomously scored items (PMCAT). Several promising item selection algorithms used in MCAT are extended to PMCAT, and two new item selection methods are proposed to improve the existing selection strategies. Two simulation studies are conducted to demonstrate the feasibility of the extended and proposed methods. The simulation results show that most of the extended item selection methods for PMCAT are feasible and the newly proposed item selection methods perform well. When pool security is also taken into account, in the two-dimensional case (Study 1) the proposed modified continuous entropy method (MCEM) is the best overall in that it attains the lowest item exposure rate while maintaining relatively high accuracy. In higher dimensions (Study 2), mutual information (MUI) and MCEM retain relatively high estimation accuracy, and item exposure rates decrease as the correlation between dimensions increases.
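Item information, the quantity most selection rules build on, can be sketched for a polytomous item under the generalized partial credit model (one common polytomous model; the unidimensional setting and the parameter values are my own simplification, not the paper's multidimensional setup): the Fisher information is the squared slope times the variance of the category score.

```python
import math

def gpcm_probs(theta, a, deltas):
    """Category probabilities of the generalized partial credit model;
    deltas[k] is the (k+1)-th step difficulty, categories scored 0..len(deltas)."""
    logits = [0.0]
    s = 0.0
    for d in deltas:
        s += a * (theta - d)
        logits.append(s)
    m = max(logits)  # stabilised softmax
    w = [math.exp(v - m) for v in logits]
    z = sum(w)
    return [x / z for x in w]

def item_information(theta, a, deltas):
    """Fisher information of a GPCM item: a^2 times the variance of the score."""
    p = gpcm_probs(theta, a, deltas)
    ek = sum(k * pk for k, pk in enumerate(p))
    ek2 = sum(k * k * pk for k, pk in enumerate(p))
    return a * a * (ek2 - ek * ek)

print(item_information(0.0, 1.2, [-0.5, 0.5]))
```

A Fisher-information-based selection rule would evaluate this quantity at the current ability estimate for every eligible item and administer the most informative one, subject to exposure control.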
|
50
|
Abstract
Background The odds ratio (OR) is used as an important metric of comparison of two or more groups in many biomedical applications when the data measure the presence or absence of an event or represent the frequency of its occurrence. In the latter case, researchers often dichotomize the count data into binary form and apply the well-known logistic regression technique to estimate the OR. In the process of dichotomizing the data, however, information is lost about the underlying counts which can reduce the precision of inferences on the OR. Methods We propose analyzing the count data directly using regression models with the log odds link function. With this approach, the parameter estimates in the model have the exact same interpretation as in a logistic regression of the dichotomized data, yielding comparable estimates of the OR. We prove analytically, using the Fisher information matrix, that our approach produces more precise estimates of the OR than logistic regression of the dichotomized data. We also show the gains in precision using simulation studies and real-world datasets. We focus on three related distributions for count data: geometric, Poisson, and negative binomial. Results In simulation studies, confidence intervals for the OR were 56–65% as wide (geometric model), 75–79% as wide (Poisson model), and 61–69% as wide (negative binomial model) as the corresponding interval from a logistic regression produced by dichotomizing the data. When we analyzed existing datasets using our approach, we found that confidence intervals for the OR could be up to 64% shorter (36% as wide) compared to if the data had been dichotomized and analyzed using logistic regression. Conclusions More precise estimates of the OR can be obtained directly from the count data by using the log odds link function. This analytic approach is easy to implement in software packages that are capable of fitting generalized linear models or of maximizing user-defined likelihood functions. 
Electronic supplementary material The online version of this article (10.1186/s12874-018-0568-9) contains supplementary material, which is available to authorized users.
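The precision gain can be sketched in a few lines (a two-group geometric simulation with invented parameters, not the paper's datasets or fitting code): estimate the log odds ratio of the event {Y ≥ 1} once from the full counts via the geometric MLE and once from the dichotomized data. Per group, the asymptotic variance ratio of the count-based to the dichotomized estimate works out to P(Y = 0) < 1, broadly consistent with the narrower confidence intervals reported in the Results.

```python
import math
import random

def sim_logor_variances(p1=0.6, p2=0.4, n=200, reps=2000, seed=3):
    """Geometric counts with P(Y=0)=p in two groups; compare the variance of
    the estimated log OR of the event {Y>=1} using (a) the full counts via
    the geometric MLE p_hat = 1/(1 + ybar) and (b) dichotomized data."""
    rng = random.Random(seed)

    def geom(p):  # number of failures before the first success
        y = 0
        while rng.random() > p:
            y += 1
        return y

    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)

    full, dich = [], []
    for _ in range(reps):
        ys1 = [geom(p1) for _ in range(n)]
        ys2 = [geom(p2) for _ in range(n)]
        ph1 = n / (n + sum(ys1))
        ph2 = n / (n + sum(ys2))
        full.append(math.log((1 - ph1) / ph1) - math.log((1 - ph2) / ph2))
        e1 = sum(y > 0 for y in ys1)
        e2 = sum(y > 0 for y in ys2)
        dich.append(math.log(e1 / (n - e1)) - math.log(e2 / (n - e2)))
    return var(full), var(dich)

vf, vd = sim_logor_variances()
print(vf, vd)  # the count-based estimator has the smaller variance
```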
|