1. Modelling the lactation curve in Alpine × Beetal crossbred dairy goats using random regression models fitted with Legendre polynomial and B-spline functions. J Anim Breed Genet 2024. PMID: 38217261. DOI: 10.1111/jbg.12849.
Abstract
The current study sought to genetically assess the lactation curve of Alpine × Beetal crossbred goats through the application of random regression models (RRM). The objective was to estimate genetic parameters of first-lactation test-day milk yield (TDMY) for devising a practical breeding strategy within the nucleus breeding programme. To model variation in the lactation curves, 25,998 TDMY records were used. For the estimation of genetic parameters, orthogonal Legendre polynomials (LEG) and B-splines (BS) were examined in order to generate suitable and parsimonious models. A single-trait RRM approach was used for the analysis. The average first-lactation TDMY was 1.22 ± 0.03 kg, and peak yield (1.35 ± 0.02 kg) was achieved around the 7th test day (TD). The investigation demonstrated the superiority of the B-spline model for the genetic evaluation of Alpine × Beetal dairy goats. The optimal random regression model was a quadratic B-spline function with six knots for the mean trend, which effectively captured the patterns of additive genetic effects, animal-specific permanent environmental effects (c²) and 22 distinct classes of (heterogeneous) residual variance. Additive variances and heritability (h²) estimates were lower in early lactation but moderate across most of the lactation, ranging from 0.09 ± 0.04 to 0.33 ± 0.06. The moderate heritability estimates indicate the potential for selection using favourable combinations of test days throughout the lactation period. A high proportion of the total variance was attributed to the animal's permanent environment. Genetic correlations were positive between adjacent TDMY and became less pronounced for more distant TDMY. Given its better fit to the lactation curve, the use of B-spline functions with RRM is recommended for the genetic evaluation of Alpine × Beetal goats.
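The basis comparison at the heart of this study can be illustrated in a few lines. Below is a minimal sketch (not the authors' random regression model, which additionally fits genetic and permanent environmental effects): a degree-4 Legendre fit and a quadratic B-spline fit of a mean lactation trend from synthetic test-day yields. The simulated curve and knot positions are illustrative assumptions.

```python
# Compare a Legendre-polynomial fit and a quadratic B-spline fit of a
# synthetic mean lactation trend; knots and data are invented for illustration.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
days = np.linspace(5, 305, 61)                               # test days across lactation
true_curve = 1.0 + 0.35 * np.exp(-((days - 70) / 90) ** 2)   # toy peak near day 70
tdmy = true_curve + rng.normal(0.0, 0.08, days.size)         # noisy test-day yields

# Legendre fit of degree 4 (fit() rescales days to [-1, 1] internally)
leg = np.polynomial.legendre.Legendre.fit(days, tdmy, deg=4)

# Quadratic B-spline (k=2) with four interior knots, echoing the low-order,
# few-knot models compared in the study
interior_knots = [60, 120, 180, 240]
bsp = LSQUnivariateSpline(days, tdmy, t=interior_knots, k=2)

for d in (15, 75, 155, 275):
    print(f"day {d:3d}: legendre={leg(d):.3f}  b-spline={float(bsp(d)):.3f}")
```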
2. An Obstacle Avoidance Path Planning and Evaluation Method for Intelligent Vehicles Based on the B-Spline Algorithm. Sensors (Basel) 2023; 23:8151. PMID: 37836981. PMCID: PMC10574840. DOI: 10.3390/s23198151.
Abstract
To meet the real-time path planning requirements of intelligent vehicles in dynamic traffic scenarios, a path planning and evaluation method is proposed in this paper. First, an obstacle avoidance path planning algorithm framework is constructed based on the B-spline algorithm and four-stage lane-changing theory. Then, to obtain the optimal real-time path, a comprehensive real-time path evaluation mechanism covering path safety, smoothness, and comfort is established. Finally, co-simulation and real vehicle testing are conducted to verify the proposed approach. In the dynamic obstacle avoidance scenario simulation, the lateral acceleration, yaw angle, yaw rate, and roll angle of the ego-vehicle fluctuate within ±2.39 m/s², ±13.31°, ±13.26°/s, and ±0.938°, respectively. The results show that the proposed algorithm can generate feasible obstacle avoidance paths in real time and that the proposed evaluation mechanism can identify the optimal path for the current scenario.
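To make the geometric core concrete, here is a minimal sketch of a lane-change path represented as a clamped cubic B-spline, with curvature evaluated as a smoothness proxy. The control points, lane width, and degree are invented for illustration; this is not the paper's four-stage planner.

```python
# Build a clamped cubic B-spline lane-change path and evaluate its curvature.
import numpy as np
from scipy.interpolate import BSpline

# Control points for a left lane change: x along the road, y lateral offset (m)
ctrl = np.array([[0, 0], [10, 0], [20, 0], [30, 3.5], [40, 3.5], [50, 3.5]],
                dtype=float)
k = 3                                             # cubic
n = len(ctrl)
# Clamped knot vector: endpoints repeated k+1 times
t = np.concatenate(([0] * (k + 1),
                    np.linspace(0, 1, n - k + 1)[1:-1],
                    [1] * (k + 1)))
path = BSpline(t, ctrl, k)

u = np.linspace(0, 1, 200)
xy = path(u)                                      # sampled path points
d1, d2 = path.derivative(1)(u), path.derivative(2)(u)
# Curvature kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), a smoothness proxy
kappa = (np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
         / (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5)
print(f"max curvature along the path: {kappa.max():.4f} 1/m")
```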
3. Semi-Empirical Pseudopotential Method for Graphene and Graphene Nanoribbons. Nanomaterials (Basel) 2023; 13:2066. PMID: 37513077. PMCID: PMC10383570. DOI: 10.3390/nano13142066.
Abstract
We implemented a semi-empirical pseudopotential (SEP) method for calculating the band structures of graphene and graphene nanoribbons. The basis functions adopted are two-dimensional plane waves multiplied by several B-spline functions along the perpendicular direction. The SEP includes both local and non-local terms, which were parametrized to fit relevant quantities obtained from first-principles calculations based on density-functional theory (DFT). With only a handful of parameters, we were able to reproduce the full band structure of graphene obtained by DFT with negligible difference. Our method is simple to use and much more efficient than the DFT calculation. We then applied the SEP method to calculate the band structures of graphene nanoribbons. By adding a simple correction term to the local pseudopotentials on the edges of the nanoribbon (which mimics the effect of edge creation), we again obtained band structures of the armchair nanoribbon fairly close to the DFT results. Our approach allows the simulation of optical and transport properties of realistic nanodevices made of graphene nanoribbons with very little computational effort.
4. Risk measurement of China's green financial market based on B-spline quantile regression. Heliyon 2023; 9:e16794. PMID: 37313159. PMCID: PMC10258423. DOI: 10.1016/j.heliyon.2023.e16794.
Abstract
To accurately measure the spillover effect in China's green financial carbon emission market, a new measurement of conditional value at risk (CoVaR) based on B-spline quantile regression is proposed. First, a variable-coefficient CoVaR model is constructed, and the model coefficients are estimated by the B-spline quantile method. Then, the relationship between the change in conditional value at risk (ΔCoVaR) and value at risk (VaR) is considered. In the empirical analysis, we measure the risk of five carbon trading quotas from China's carbon emission projects between 2014 and 2022 and verify the superiority of the B-spline approach by Monte Carlo simulation. The empirical results show that the B-spline method has the highest risk-fitting success rate and the smallest error.
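The building block of such a CoVaR estimate is quantile regression on a B-spline basis. A minimal sketch on synthetic data follows; the quantile level, knots, and data-generating process are illustrative assumptions, and the paper's variable-coefficient CoVaR construction adds a conditioning step on top of this.

```python
# Quantile regression on a B-spline design matrix: the fitted conditional
# quantile at a low tau plays the role of a VaR curve.
import numpy as np
from scipy.interpolate import BSpline
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 500))               # e.g. a market return factor
y = np.sin(2 * np.pi * x) + rng.standard_t(4, x.size) * (0.2 + 0.3 * x)

k = 3
t = np.concatenate(([0] * (k + 1), [0.25, 0.5, 0.75], [1] * (k + 1)))
X = BSpline.design_matrix(x, t, k).toarray()      # B-spline basis as regressors

tau = 0.05                                        # lower-tail quantile ~ VaR level
fit = QuantReg(y, X).fit(q=tau)
var_curve = X @ fit.params                        # fitted conditional 5% quantile
print(f"estimated {tau:.0%} quantile at x=0.5: "
      f"{var_curve[np.abs(x - 0.5).argmin()]:.3f}")
```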
5. Right-censored partially linear regression model with error in variables: application with carotid endarterectomy dataset. Int J Biostat 2023. PMID: 37257507. DOI: 10.1515/ijb-2022-0044.
Abstract
This paper considers a partially linear regression model relating a right-censored response variable to predictors and an extra covariate measured with error. The main difficulty is that both the censorship and the measurement error must be handled to estimate the model correctly. To this end, we propose three modified semiparametric estimators, obtained from local polynomial regression, kernel smoothing, and B-spline smoothing, based on a kernel deconvolution approach and a synthetic data transformation. The kernel deconvolution technique addresses the measurement error in the model, while the synthetic data transformation, a very common method in the literature, incorporates the effect of censorship into the estimation procedure. The performance of the introduced estimators is compared in a detailed Monte Carlo simulation study. In addition, carotid endarterectomy data are used as a real-world example and the results are presented. According to the results, the deconvoluted local polynomial method gives better estimates than the other two methods.
6. Navigation with Polytopes: A Toolbox for Optimal Path Planning with Polytope Maps and B-spline Curves. Sensors (Basel) 2023; 23:3532. PMID: 37050593. PMCID: PMC10099157. DOI: 10.3390/s23073532.
Abstract
To deal with the problem of optimal path planning in 2D space, this paper introduces a new toolbox named "Navigation with Polytopes" and explains the algorithms behind it. The toolbox allows one to create a polytopic map from a standard grid map, search for an optimal corridor, and plan a safe B-spline reference path for mobile robot navigation. Specifically, the B-spline path is converted into its equivalent Bézier representation via a novel calculation method in order to reduce the conservativeness of the constrained path planning problem. The conversion handles the differences between the curve intervals and allows for efficient computation. Furthermore, two different constraint formulations for keeping a B-spline path within the sequence of connected polytopes are proposed, one with a guaranteed solution. The toolbox was extensively validated through simulations and experiments.
7. Orthogonal cubic splines for the numerical solution of nonlinear parabolic partial differential equations. MethodsX 2023; 10:102190. PMID: 37168771. PMCID: PMC10165129. DOI: 10.1016/j.mex.2023.102190.
Abstract
In this paper, a new orthogonal basis for the space of cubic splines is introduced. A linear combination of orthogonal cubic splines, whose coefficients are computed with numerically stable formulae, is used to approximate functions. Applications to the numerical solution of some parabolic partial differential equations are given, in which the approximations are obtained using the first and second integrals of the orthogonal splines, leading to an efficient solution procedure. Convergence of the approximation scheme is analysed. A comparison of the obtained numerical solutions with those of other papers indicates that the presented method is reliable and yields results with good accuracy. The main parts of the study are as follows:
• We propose a robust approach based on the orthogonal cubic spline procedure in conjunction with the operational matrix (see the sketch after this list).
• Convergence of the approximation scheme is analysed.
• Numerical examples show that the proposed method is very accurate.
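As a numerical illustration of orthogonalizing a cubic-spline basis (not the paper's closed-form construction), one can evaluate a cubic B-spline basis, form its L² Gram matrix by quadrature, and orthonormalize with the inverse Cholesky factor:

```python
# Numerically orthonormalize a cubic B-spline basis in L2([0, 1]).
import numpy as np
from scipy.interpolate import BSpline

k = 3
t = np.concatenate(([0] * (k + 1), np.linspace(0, 1, 6)[1:-1], [1] * (k + 1)))
nbasis = len(t) - k - 1

# Dense grid on [0, 1]; composite trapezoid quadrature is enough for a sketch
x = np.linspace(0, 1, 2001)
B = BSpline.design_matrix(x, t, k).toarray()       # (npoints, nbasis)

w = np.full(x.size, x[1] - x[0]); w[[0, -1]] /= 2  # trapezoid weights
G = B.T @ (B * w[:, None])                         # Gram matrix G_ij = <B_i, B_j>

L = np.linalg.cholesky(G)
Q = B @ np.linalg.inv(L).T                         # values of the orthonormal basis

# Check: the new basis is orthonormal in L2 up to quadrature error
err = np.abs(Q.T @ (Q * w[:, None]) - np.eye(nbasis)).max()
print(f"max deviation from identity: {err:.2e}")
```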
8. Methodology for Quantifying Volatile Compounds in a Liquid Mixture Using an Algorithm Combining B-Splines and Artificial Neural Networks to Process Responses of a Thermally Modulated Metal-Oxide Semiconductor Gas Sensor. Sensors (Basel) 2022; 22:8959. PMID: 36433555. PMCID: PMC9697949. DOI: 10.3390/s22228959.
Abstract
Metal oxide semiconductor (MOS) gas sensors have many advantages, but the main obstacle to their widespread use is the cross-sensitivity observed when this type of detector is used to analyze gas mixtures. Thermal modulation of the heater integrated with a MOS gas sensor reduces this problem and is a promising solution for applications requiring the selective detection of volatile compounds. Nevertheless, the interpretation of the sensor output signals, which take the form of complex, unique patterns, is difficult and requires advanced signal processing techniques. The study focuses on developing a methodology to measure and process the output signal of a thermally modulated MOS gas sensor based on a B-spline curve and artificial neural networks (ANNs), enabling the quantitative analysis of volatile components (ethanol and acetone) coexisting in mixtures. B-spline approximation, applied in the first stage, extracts the relevant information from the gas sensor output voltage and reduces the size of the measurement dataset while retaining its most vital features. The fitted curve parameters are then used as the input vector for an ANN model based on a multilayer perceptron. The results show that combining B-spline and ANN modeling techniques markedly improves the response selectivity of a thermally modulated MOS gas sensor.
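A minimal sketch of the two-stage pipeline described above, with a synthetic stand-in for the sensor response model: each modulation-period signal is compressed to B-spline coefficients, which feed a small multilayer perceptron that predicts the two concentrations. All names and ranges here are illustrative assumptions.

```python
# Stage 1: B-spline coefficients as features; stage 2: MLP regression.
import numpy as np
from scipy.interpolate import splrep
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
tgrid = np.linspace(0, 1, 200)                    # one modulation period

def sensor_response(c_eth, c_ace):
    """Toy voltage pattern depending on ethanol/acetone concentrations."""
    return (c_eth * np.exp(-((tgrid - 0.3) / 0.10) ** 2)
            + c_ace * np.exp(-((tgrid - 0.7) / 0.15) ** 2)
            + rng.normal(0, 0.01, tgrid.size))

conc = rng.uniform(0.1, 1.0, size=(300, 2))       # (ethanol, acetone) pairs
knots = np.linspace(0.1, 0.9, 9)                  # interior knots for splrep
# splrep returns (t, c, k); keep the 13 real coefficients (FITPACK pads c
# with trailing zeros)
feats = np.array([splrep(tgrid, sensor_response(*c), k=3, t=knots)[1][:13]
                  for c in conc])

mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
mlp.fit(feats[:250], conc[:250])
print("R^2 on held-out mixtures:", round(mlp.score(feats[250:], conc[250:]), 3))
```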
9. Asymptotic behavior of an intrinsic rank-based estimator of the Pickands dependence function constructed from B-splines. Extremes 2022; 26:101-138. PMID: 36751468. PMCID: PMC9898389. DOI: 10.1007/s10687-022-00451-9.
Abstract
A bivariate extreme-value copula is characterized by its Pickands dependence function, i.e., a convex function defined on the unit interval satisfying boundary conditions. This paper investigates the large-sample behavior of a nonparametric estimator of this function due to Cormier et al. (Extremes 17:633-659, 2014). These authors showed how to construct this estimator through constrained quadratic median B-spline smoothing of pairs of pseudo-observations derived from a random sample. Their estimator is shown here to exist whatever the order m ≥ 3 of the B-spline basis, and its consistency is established under minimal conditions. The large-sample distribution of this estimator is also determined under the additional assumption that the underlying Pickands dependence function is a B-spline of given order with a known set of knots.
10. Semiparametric single-index models for optimal treatment regimens with censored outcomes. Lifetime Data Anal 2022; 28:744-763. PMID: 35939142. DOI: 10.1007/s10985-022-09566-4.
Abstract
There is a growing interest in precision medicine, where a potentially censored survival time is often the most important outcome of interest. To discover optimal treatment regimens for such an outcome, we propose a semiparametric proportional hazards model by incorporating the interaction between treatment and a single index of covariates through an unknown monotone link function. This model is flexible enough to allow non-linear treatment-covariate interactions and yet provides a clinically interpretable linear rule for treatment decision. We propose a sieve maximum likelihood estimation approach, under which the baseline hazard function is estimated nonparametrically and the unknown link function is estimated via monotone quadratic B-splines. We show that the resulting estimators are consistent and asymptotically normal with a covariance matrix that attains the semiparametric efficiency bound. The optimal treatment rule follows naturally as a linear combination of the maximum likelihood estimators of the model parameters. Through extensive simulation studies and an application to an AIDS clinical trial, we demonstrate that the treatment rule derived from the single-index model outperforms the treatment rule under the standard Cox proportional hazards model.
11. Fitting random regression models with Legendre polynomial and B-spline to model the lactation curve for Indian dairy goat of semi-arid tropic. J Anim Breed Genet 2022; 139:414-422. PMID: 35404489. DOI: 10.1111/jbg.12678.
Abstract
The present investigation aimed at the genetic evaluation of the tropical Indian dairy Jamunapari goat using random regression models (RRM) to estimate genetic parameters of the first three lactations across test days (TD), and at developing a pragmatic breeding plan for the nucleus. Variation in the lactation curves was modelled using 67,172 TD milk yield (TDMY) records. To obtain adequate and parsimonious models for the estimation of genetic parameters, orthogonal Legendre polynomials (LP) and B-splines (BS) were compared. The analysis was carried out using a single-trait RRM approach. Average TDMY was 0.72, 0.81 and 0.79 kg in the first, second and third parities, respectively, each peaking at the 4th TD. The BS function resulted in robust genetic parameters and a smoother lactation curve than LP. Maternal effects were evaluated and then dropped from the final model, owing to no significant contribution to the genetic variance. The best RRM was a quadratic BS function with six knots for the mean trend, curves of additive genetic and animal permanent environmental (c²) effects, and 22 classes of residual variance. Additive variances and heritability (h²) estimates were higher in early lactation. For the first parity, h² estimates varied between 0.19 and 0.35 across TD. The moderate h² estimates suggest further scope for selection using desirable combinations of TD over the lactation. A very high proportion of variance was due to c² across TD in all three lactations. Genetic correlations were positive and larger between adjacent TDMY and weakened for distant TDMY. Given the robust estimates of genetic parameters and the better fit of the lactation curve, we suggest the use of the B-spline function for routine genetic evaluation of the Jamunapari goat.
12. Scalable proximal methods for cause-specific hazard modeling with time-varying coefficients. Lifetime Data Anal 2022; 28:194-218. PMID: 35092553. PMCID: PMC9201734. DOI: 10.1007/s10985-021-09544-2.
Abstract
Survival modeling with time-varying coefficients has proven useful in analyzing time-to-event data with one or more distinct failure types. When studying the cause-specific etiology of breast and prostate cancers using the large-scale data from the Surveillance, Epidemiology, and End Results (SEER) Program, we encountered two major challenges that existing methods for estimating time-varying coefficients cannot tackle. First, these methods, dependent on expanding the original data in a repeated measurement format, result in formidable time and memory consumption as the sample size escalates to over one million. In this case, even a well-configured workstation cannot accommodate their implementations. Second, when the large-scale data under analysis include binary predictors with near-zero variance (e.g., only 0.6% of patients in our SEER prostate cancer data had tumors regional to the lymph nodes), existing methods suffer from numerical instability due to ill-conditioned second-order information. The estimation accuracy deteriorates further with multiple competing risks. To address these issues, we propose a proximal Newton algorithm with a shared-memory parallelization scheme and tests of significance and nonproportionality for the time-varying effects. A simulation study shows that our scalable approach reduces the time and memory costs by orders of magnitude and enjoys improved estimation accuracy compared with alternative approaches. Applications to the SEER cancer data demonstrate the real-world performance of the proximal Newton algorithm.
13. A semi-parametric Bayesian model for semi-continuous longitudinal data. Stat Med 2022; 41:2354-2374. PMID: 35274335. DOI: 10.1002/sim.9359.
Abstract
Semi-continuous data present challenges in both model fitting and interpretation. Parametric distributions may be inappropriate for data with extremely long right tails. Mean effects of covariates, susceptible to extreme values, may fail to capture relevant information for most of the sample. We propose a two-component semi-parametric Bayesian mixture model, with the discrete component captured by a probability mass (typically at zero) and the continuous component of the density modeled by a mixture of B-spline densities that can be flexibly fit to any data distribution. The model includes subject-level random effects to allow application to longitudinal data. We specify prior distributions on parameters and perform model inference using a Markov chain Monte Carlo (MCMC) Gibbs-sampling algorithm programmed in R. Statistical inference can be made for multiple quantiles of the covariate effects simultaneously, providing a comprehensive view. Various MCMC sampling techniques are used to facilitate convergence. We demonstrate the performance and interpretability of the model via simulations and analyses of data from the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA) study on alcohol binge drinking.
14. A hierarchical approach for rigid-body dynamics model simplification of a high-speed parallel robot by considering kinematics performance. Sci Prog 2021; 104:368504211063072. PMID: 34903104. PMCID: PMC10358639. DOI: 10.1177/00368504211063072.
Abstract
For the real-time control of a high-speed parallel robot, a concise and precise dynamics model is essential for the design of the dynamics controller. However, the complete rigid-body dynamics model of parallel robots is too complex for online calculation. Therefore, a hierarchical approach for dynamics model simplification that considers kinematics performance is proposed in this paper. First, considering the motion smoothness of the end-effector, trajectory planning based on workspace discretization is carried out, and the effects of the trajectory parameters and acceleration types on the planning are discussed. For the fifth-order and seventh-order B-spline acceleration types, however, the trajectory undergoes excessive deformation after planning. Therefore, a comprehensive index that considers both motion smoothness and trajectory deformation is proposed. Finally, a dynamics model simplification method based on combined mass distribution coefficients is studied. Results show that the hierarchical approach guarantees both excellent kinematics performance of the parallel robot and accuracy of the simplified dynamics model under different trajectory parameters and acceleration types. The proposed method can be applied to the design of the dynamics controller to enhance the robot's performance.
15. Representation of a Monotone Curve by a Contour with Regular Change in Curvature. Entropy (Basel) 2021; 23:923. PMID: 34356464. PMCID: PMC8305980. DOI: 10.3390/e23070923.
Abstract
This article solves the problem of modelling, with specified accuracy, a smooth contour with a regular change in curvature that represents a monotone curve. The contour is formed within the area of the possible location of a convex curve that can interpolate a point series. The assumption that, if a sequence of points can be interpolated by a monotone curve, then the reference curve on which these points were assigned is itself monotone makes it possible to apply the proposed approach to estimating the interpolation error of a point series of arbitrary configuration. The proposed methods for forming a convex regular contour from arcs of ellipses and B-splines ensure the interpolation of any point series in parts that can be interpolated by a monotone curve. At the same time, the deviation of the contour from the boundaries of the area of possible location of the monotone curve can be controlled. The capabilities of the developed methods are tested on problems of interpolating point series belonging to monotone curves. The problems are solved in the SolidWorks CAD system using a software application created on the basis of the methods developed in this work.
16. A Real-Time Collision Avoidance Framework of MASS Based on B-Spline and Optimal Decoupling Control. Sensors (Basel) 2021; 21:4911. PMID: 34300648. PMCID: PMC8309775. DOI: 10.3390/s21144911.
Abstract
Real-time collision-avoidance navigation of autonomous ships is required in many application scenarios, such as carriage of goods by sea and search and rescue. The collision avoidance algorithm is the core of autonomous navigation for maritime autonomous surface ships (MASS). To achieve real-time, collision-free navigation in multi-ship encounters in an uncertain environment, a real-time collision avoidance framework is proposed using B-splines and optimal decoupling control. The framework handles uncertain environments with limited-sensing MASS and plans dynamically feasible, highly reliable, and safe collision-avoidance trajectories. First, building on a collision risk assessment, a B-spline-based collision avoidance trajectory search (BCATS) algorithm is proposed to generate collision-free trajectories efficiently. Second, a waypoint-based collision-avoidance trajectory optimization with path-speed decoupling control is proposed, delivering two benefits: reduced control cost and a smoother collision-avoidance trajectory. Finally, an experiment was conducted using an Electronic Chart System (ECS). The results demonstrate the robustness and real-time performance of the collision-avoidance trajectories planned by the proposed system.
17. Geometrical Consistency Modeling on B-Spline Parameter Domain for 3D Face Reconstruction From Limited Number of Wild Images. Front Neurorobot 2021; 15:652562. PMID: 33935676. PMCID: PMC8079323. DOI: 10.3389/fnbot.2021.652562.
Abstract
A number of methods have been proposed for face reconstruction from single or multiple images. However, reconstruction from a limited number of in-the-wild images, with their complex and varied imaging conditions, diverse facial appearance, and few high-quality images, remains a challenge. Moreover, most current mesh-based methods cannot generate a high-quality face model because of local mapping deviation in geometric optics and the distortion error introduced by discrete differential operations. In this paper, accurate geometrical consistency modeling on the B-spline parameter domain is proposed to reconstruct high-quality face surfaces from such images. The modeling is completely consistent with the laws of geometric optics, and the B-spline representation reduces distortion during surface deformation. In our method, 0th- and 1st-order consistency of stereo are formulated based on low-rank texture structures and local normals, respectively, to approach precise geometric modeling for face reconstruction. A practical solution combining the two consistency terms, together with an iterative algorithm, is proposed to optimize the highly detailed B-spline face effectively. Extensive empirical evaluations on synthetic and unconstrained data demonstrate the effectiveness of our method in challenging scenarios, e.g., a limited number of images with different head poses, illuminations, and expressions.
18. Spatiotemporal Free-Form Registration Method Assisted by a Minimum Spanning Tree During Discontinuous Transformations. J Digit Imaging 2021; 34:190-203. PMID: 33483863. DOI: 10.1007/s10278-020-00409-y.
Abstract
The sliding motion along the boundaries of discontinuous regions has been actively studied in the B-spline free-form deformation framework. This study focuses on sliding motion for a velocity field-based 3D+t registration. The discontinuity of the tangent direction guides the deformation of the object region, and separate control of the two regions provides better registration accuracy. The sliding motion under the velocity field-based transformation is handled with an α-Rényi entropy estimator built on a minimum spanning tree (MST) topology. Moreover, a new method for changing the topology of the MST is proposed. The topology change proceeds as follows: insert random noise, construct the MST, and remove the random noise while preserving the local connection consistency of the MST. This random noise process (RNP) prevents the α-Rényi entropy-based registration from degrading under sliding motion, because the RNP creates a small disturbance around special locations. Experiments were performed using two publicly available datasets: the DIR-Lab dataset, which consists of 4D pulmonary computed tomography (CT) images, and a benchmarking framework dataset for cardiac 3D ultrasound. For the 4D pulmonary CT images, the RNP produced a significant improvement over the original MST with sliding motion (p<0.05). For the cardiac 3D ultrasound dataset, only the discontinuity-based registration showed an effect of the RNP. In contrast, the single MST without sliding motion did not show any improvement. These experiments demonstrate the effectiveness of the RNP for sliding motion.
19. Joint analysis of longitudinal measurements and survival times with a cure fraction based on partly linear mixed and semiparametric cure models. Pharm Stat 2020; 20:362-374. PMID: 33225606. DOI: 10.1002/pst.2082.
Abstract
In a joint analysis of longitudinal quality of life (QoL) scores and relapse-free survival (RFS) times from a clinical trial on early breast cancer conducted by the Canadian Cancer Trials Group, we observed a complicated trajectory of QoL scores and the existence of long-term survivors. Motivated by this observation, we propose a flexible joint model for the longitudinal measurements and survival times. A partly linear mixed effects model, approximated by B-splines, is used to capture the complicated but smooth trajectory of the longitudinal measurements, and a semiparametric mixture cure model with a B-spline baseline hazard is used to model the survival times with a cure fraction. The two models are linked by shared random effects to explore the dependence between longitudinal measurements and survival times. A semiparametric inference procedure with an EM algorithm is proposed to estimate the parameters in the joint model. The performance of the proposed procedures is evaluated by simulation studies and through application to data from the clinical trial that motivated this research.
20. Pattern discovery of health curves using an ordered probit model with Bayesian smoothing and functional principal component analysis. Stat Methods Med Res 2020; 30:458-472. PMID: 32976070. DOI: 10.1177/0962280220951834.
Abstract
This article is motivated by the need to discover patterns in patients' health based on their daily settings of care, to help health policy-makers improve the effectiveness of distributing funding for health services. The hidden process of one's health status is assumed to be a continuous smooth function, called the health curve, ranging from perfectly healthy to dead. The health curves are linked to the categorical setting of care using an ordered probit model and are inferred through Bayesian smoothing. The challenges include the nontrivial constraints on the lower bound of the health status (death) and on the model parameters to ensure model identifiability. We use the Markov chain Monte Carlo method to estimate the parameters and health curves. Functional principal component analysis is applied to the patients' estimated health curves to discover common health patterns. The proposed method is demonstrated through an application to patients hospitalized for stroke in Ontario. While this paper focuses on a health care application, the proposed model and its implementation have the potential to be applied to many domains in which the response variable is ordinal and there is a hidden process. Our implementation is available at https://github.com/liangliangwangsfu/healthCurveCode.
21. Estimating model-based nonnegative population marginal means in application to medical expenditures covered by different health care policies - A study on Medical Expenditure Panel Survey. Stat Methods Med Res 2020; 30:299-315. PMID: 32907489. DOI: 10.1177/0962280220954241.
Abstract
Medical care expenditure is historically an important public health issue, greatly impacting government health policies as well as patients' financial and medical decisions. In population health research, a numeric attribute is commonly discretized into a few ordinal groups to examine population characteristics. Population marginal mean estimation by the ANOVA approach is often inflexible, since it relies on a pre-defined grouping of the covariate. In this paper, we propose a method to estimate the population marginal mean using B-spline-based regression in the manner of a generalized additive model, as an alternative to ANOVA. Since medical expenditure is always nonnegative, a Bayesian approach is also implemented to enforce the nonnegativity constraint on the marginal mean estimates. The proposed method can flexibly estimate marginal means for user-specified groupings after model fitting, in a post-hoc manner, a clear advantage over the ANOVA approach. We show that this method is inferentially superior to ANOVA through theoretical investigations and an extensive Monte Carlo study. A real data analysis using Medical Expenditure Panel Survey data, assisted by visualization tools, demonstrates the applicability of the proposed approach and leads to some interesting observations relevant to public health discussions.
22. Surface-based modeling of muscles: Functional simulation of the shoulder. Med Eng Phys 2020; 82:1-12. PMID: 32709260. DOI: 10.1016/j.medengphy.2020.04.010.
Abstract
Musculoskeletal simulations are an essential tool for studying the functional implications of pathologies and of potential surgical outcomes, e.g., for the complex shoulder anatomy. Most shoulder models rely on line-segment approximations of muscles, with potential limitations. Comprehensive shoulder models based on continuum mechanics are scarce due to their complexity in both modeling and computation. In this paper, we present a surface-based modeling approach for muscles, which simplifies the modeling process and is efficient for computation. We propose to use surface geometries for modeling muscles and devise an automatic approach to generate such models given the locations of the origin and insertion of tendons. The surfaces are expressed as higher-order tensor-product B-splines, which ensure smoothness of the geometric representation, and are simulated as membrane elements within a finite element simulation. This is demonstrated on a comprehensive model of the upper limb, where the muscle activations needed to perform desired motions are obtained by inverse dynamics. In synthetic examples, our proposed surface elements prove both easy to customize (e.g., with spatially varying material properties) and substantially (up to 12 times) faster in simulation than their volumetric counterpart. With our automatic approach to wrapping muscles around bones, the humeral head is shown to be wrapped physiologically consistently with surface elements. Our functional simulation successfully replicates a tracked shoulder motion during activities of daily living. We demonstrate surface-based models to be a numerically stable and computationally efficient compromise between line-segment and volumetric models, enabling anatomical correctness, subject-specific customization, and fast simulations for a comprehensive simulation of musculoskeletal motion.
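The geometric core, a tensor-product B-spline surface evaluated smoothly over a parameter domain, can be sketched as follows (synthetic heights; this illustrates the representation, not the paper's membrane finite-element simulation):

```python
# Fit and evaluate a tensor-product cubic B-spline surface on a (u, v) grid.
import numpy as np
from scipy.interpolate import RectBivariateSpline

u = np.linspace(0, 1, 20)
v = np.linspace(0, 1, 15)
U, V = np.meshgrid(u, v, indexing="ij")
heights = 0.1 * np.sin(np.pi * U) * np.cos(np.pi * V)   # toy "muscle sheet" samples

# Cubic-by-cubic tensor B-spline surface
surf = RectBivariateSpline(u, v, heights, kx=3, ky=3)

# Smooth evaluation and partial derivatives at arbitrary parameter values
pt = surf(0.37, 0.62)[0, 0]
du = surf(0.37, 0.62, dx=1)[0, 0]                       # tangent component in u
dv = surf(0.37, 0.62, dy=1)[0, 0]                       # tangent component in v
print(f"height {pt:.4f}, du {du:.4f}, dv {dv:.4f}")
```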
23.
Abstract
Many clinical studies collect longitudinal and survival data concurrently. Joint models combining these two types of outcomes through shared random effects are frequently used in practical data analysis. Standard joint models assume that the coefficients for the longitudinal and survival components are time-invariant; in many applications, this assumption is overly restrictive. In this research, we extend the standard joint model to include time-varying coefficients in both the longitudinal and survival components, and we present a data-driven method for variable selection. Specifically, we use a B-spline decomposition and penalized likelihood with the adaptive group LASSO to select the relevant independent variables and to distinguish the time-varying from the time-invariant effects in the two model components. We use Gauss-Legendre and Gauss-Hermite quadratures to approximate the integrals in the absence of closed-form solutions. Simulation studies show good selection and estimation performance. Finally, we use the proposed procedure to analyze data from a study of primary biliary cirrhosis.
24. Intensity-curvature functional based digital high pass filter of the bivariate cubic B-spline model polynomial function. Vis Comput Ind Biomed Art 2019; 2:9. PMID: 32240391. PMCID: PMC7099544. DOI: 10.1186/s42492-019-0017-6.
Abstract
This research addresses the design of an intensity-curvature functional (ICF) based digital high-pass filter (HPF). The ICF is calculated from a bivariate cubic B-spline model polynomial function, and the resulting filter is called the ICF-based HPF. To calculate the ICF, the model function needs to be twice differentiable and to have a non-null classic-curvature at the origin (0, 0) of the pixel coordinate system. The theoretical basis of this research is the intensity-curvature concept, which replaces signal intensity with the product of the signal intensity and the sum of the second-order partial derivatives of the model function. Extending the concept to two dimensions (2D) makes it possible to calculate the ICF of an image. A theoretical treatment demonstrates the hypothesis that the ICF is a high-pass-filtered signal. Empirical evidence then validates this assumption and extends the comparison between the ICF-based HPF and ten different HPFs, among them the traditional HPF and a particle swarm optimization (PSO) based HPF. Comparisons of image-space and k-space magnitudes indicate that the HPFs behave differently: traditional HPF filtering and ICF-based filtering are superior to PSO-based filtering, and images filtered with the traditional HPF are sharper than images filtered with the ICF-based filter. The contributions of this research are: (1) a mathematical description of the constraints the ICF needs to obey in order to function as an HPF; (2) the mathematics of the ICF-based HPF of the bivariate cubic B-spline; (3) image-space comparisons between HPFs; and (4) k-space magnitude comparisons between HPFs. This research confirms the mathematical procedure to use when designing a 2D HPF from a bivariate model polynomial function.
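The intensity-curvature product stated above can be sketched directly: model the image with a bivariate cubic B-spline and multiply each intensity by the sum of the model's second-order partial derivatives. This illustrates only the stated concept, not the paper's full ICF and HPF derivation.

```python
# Intensity times classic-curvature of a bivariate cubic B-spline image model.
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)     # smooth-ish toy image
rows, cols = np.arange(64), np.arange(64)

model = RectBivariateSpline(rows, cols, img, kx=3, ky=3)

# Classic-curvature: sum of the second-order partials of the model function,
# evaluated on the pixel grid
cc = model(rows, cols, dx=2) + model(rows, cols, dy=2)
filtered = img * cc                                     # intensity x curvature
print("mean |filtered|:", float(np.abs(filtered).mean()))
```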
25. Dynamic path planning and trajectory tracking using MPC for satellite with collision avoidance. ISA Trans 2019; 84:128-141. PMID: 30316573. DOI: 10.1016/j.isatra.2018.09.020.
Abstract
This paper proposes a dynamic path planning and trajectory tracking algorithm for an autonomous satellite, released from the space station, to reach the desired position for performing space tasks. The complex construction of the space station imposes a geometric channel constraint for obstacle avoidance. In addition, a three-dimensional B-spline template that minimizes the curvature of the path is designed; it guarantees curvature continuity, which makes the trajectory smooth and prevents the satellite from stopping at discontinuous waypoints. The reference states and inputs are then solved by a new projection method, which provides a foundation for the subsequent trajectory tracking. Subsequently, a finite-horizon model predictive control method is constructed for path tracking. The benefits of this approach are that constraints are taken into consideration and that optimal performance is achieved by minimizing fuel consumption compared with other tracking controllers. Closed-loop stability is guaranteed by the feedback controller, a terminal penalty, and a new terminal constraint set. Simulation experiments illustrate the effectiveness and practicality of the algorithm.
26. The influence of the image registration method on the adaptive radiotherapy. A proof of the principle in a selected case of prostate IMRT. Phys Med 2018; 45:93-98. PMID: 29472097. DOI: 10.1016/j.ejmp.2017.12.007.
Abstract
PURPOSE: To analyse the influence of the image registration method on the adaptive radiotherapy of an IMRT prostate treatment, and to compare the dose accumulation according to 3 different image registration methods with the planned dose. MATERIAL AND METHODS: The IMRT prostate patient was CT-imaged 3 times throughout his treatment. The prostate, PTV, rectum and bladder were segmented on each CT. Rigid, deformable (DIR) B-spline, and DIR-with-landmarks registration algorithms were employed. The difference between the accumulated and planned doses was evaluated by the gamma index. The Dice coefficient and the Hausdorff distance were used to evaluate the overlap between volumes and thus quantify the quality of the registration. RESULTS: When comparing adaptive vs. non-adaptive RT, the gamma index calculation showed large differences depending on the image registration method (as much as 87.6% in the case of DIR B-spline). The Dice coefficient showed that the best result was obtained with DIR with landmarks, which was always above 0.77, the value reported as a recommended minimum for prostate studies in a multi-centre review. CONCLUSIONS: Apart from showing the importance of applying an adaptive RT protocol in a particular treatment, this work shows that the choice of registration method is decisive for the result of adaptive radiotherapy and dose accumulation.
27. Semiparametric partially linear varying coefficient models with panel count data. Lifetime Data Anal 2017; 23:439-466. PMID: 27118299. DOI: 10.1007/s10985-016-9368-x.
Abstract
This paper studies semiparametric regression analysis of panel count data, which arise naturally when recurrent events are considered; such data frequently occur in medical follow-up studies and reliability experiments. To explore nonlinear interactions between covariates, we propose a class of partially linear models with possibly varying coefficients for the mean function of the counting processes with panel count data. The functional coefficients are estimated by B-spline function approximations. The estimation procedures are based on maximum pseudo-likelihood and likelihood approaches and are easy to implement. The asymptotic properties of the resulting estimators are established, and their finite-sample performance is assessed by Monte Carlo simulation studies. We also demonstrate the value of the proposed method through the analysis of a cancer data set, where the new modeling approach provides more comprehensive information than the usual proportional mean model.
28. A novel method of constructing compactly supported orthogonal scaling functions from splines. J Inequal Appl 2017; 2017:155. PMID: 28725132. PMCID: PMC5491697. DOI: 10.1186/s13660-017-1425-9.
Abstract
A novel construction of compactly supported orthogonal scaling functions and wavelets from spline functions is presented in this paper. Let B_n be the central B-spline of order n; except for order one, B_n is not orthogonal to its integer translates. By the standard orthonormalization procedure, one can construct an orthogonal scaling function corresponding to B_n; however, unlike B_n itself, this scaling function no longer has compact support. To obtain orthogonality while keeping the compact support of B_n, we put forward a simple yet efficient construction method that uses the orthonormalization formula together with a weighted-average method to construct the two-scale symbol of compactly supported orthogonal scaling functions.
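For reference, the orthonormalization procedure the abstract invokes is the standard Fourier-domain identity below (notation ours, with \(\widehat{B}_n\) the Fourier transform of the order-n B-spline). Dividing by the periodized energy yields orthonormal integer shifts but destroys compact support, which is exactly the obstacle the paper's weighted-average construction works around.

```latex
% Standard orthonormalization of the Riesz basis generated by B_n:
\[
  \widehat{\phi^{\perp}}(\omega)
  = \frac{\widehat{B}_n(\omega)}
         {\Bigl(\sum_{k \in \mathbb{Z}}
           \bigl|\widehat{B}_n(\omega + 2\pi k)\bigr|^{2}\Bigr)^{1/2}}
\]
```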
29.
Abstract
Motivated by a genetic investigation of the progressive decline in renal function in a clinical trial of kidney disease, we develop a practical test for evaluating group differences in trajectories under a semi-parametric modeling framework. The temporal patterns, or trajectories, of the longitudinal data are approximated non-parametrically with B-splines. This approximation asymptotically converts the problem of testing for a trajectory difference into a significance test on regression coefficients, which can be estimated simply by generalized estimating equations. To select the optimal number of inner knots for the B-splines, a cross-validation procedure is performed using the generalized residual sum of squares as the criterion. The proposed test successfully detects a significant difference in the underlying genetic impact on the progression of renal disease that is not captured by the parametric approach.
30. Automatic Structure Discovery for Varying-coefficient Partially Linear Models. Commun Stat Theory Methods 2017; 46:7703-7716. PMID: 31168223. PMCID: PMC6546297. DOI: 10.1080/03610926.2016.1161796.
Abstract
Varying-coefficient partially linear models provide useful tools for modeling covariate effects on a response variable in regression. One key question in these models is the choice of model structure, that is, how to decide which covariates have linear effects and which have nonlinear effects. In this article, we propose a profile method for identifying the covariates with linear or nonlinear effects. Our proposed method is a penalized regression approach based on the group minimax concave penalty. Under suitable conditions, we show that the proposed method can, with high probability, correctly determine which covariates have a linear effect and which do not. The convergence rate of the linear estimator is established, as well as its asymptotic normality. The performance of the proposed method is evaluated through a simulation study that supports our theoretical results.
31. A time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes with applications in substance abuse research. Stat Med 2017; 36:827-837. PMID: 27873343. DOI: 10.1002/sim.7177.
Abstract
This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of the estimated trajectory functions improves as the sample size increases; the accuracy under equal group sizes is higher only when the sample size is small (100). In terms of the performance of the hypothesis tests, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increase is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component.
32. A quantitative evaluation of pleural effusion on computed tomography scans using B-spline and local clustering level set. J Xray Sci Technol 2017; 25:887-905. PMID: 28550270. DOI: 10.3233/xst-17264.
Abstract
Estimating the volume of a pleural effusion is an important clinical issue, and existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has another disease (e.g., pneumonia). To help solve this issue, the objective of this study was to develop and test a novel algorithm jointly using B-splines and a local clustering level set method, termed BLL. The BLL algorithm was applied to a dataset of 27 pleural effusions detected on chest CT examinations of 18 adult patients with free pleural effusion. The average volumes of pleural effusion computed with the BLL algorithm and assessed manually by physicians were 586 ± 339 ml and 604 ± 352 ml, respectively. For the same patient, the volume of the pleural effusion segmented semi-automatically was 101.8 ± 4.6% of that segmented manually, with a Dice similarity of 0.917 ± 0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
33. Lane changing trajectory planning and tracking control for intelligent vehicle on curved road. Springerplus 2016; 5:1150. PMID: 27504248. PMCID: PMC4956640. DOI: 10.1186/s40064-016-2806-0.
Abstract
This paper explores lane-changing trajectory planning and tracking control for an intelligent vehicle on a curved road. A novel arc-based trajectory is planned as the desired lane-changing trajectory, and a kinematic controller and a dynamics controller are designed to implement the trajectory tracking control. First, the kinematic and dynamics models of the intelligent vehicle with non-holonomic constraints are established. Second, two practical constraints on lane changing on a curved road (LCCP) are proposed. Third, two arcs with the same curvature are constructed for the desired lane-changing trajectory; from the geometric characteristics of the arc trajectory, equations of the desired state can be calculated. Finally, the backstepping method is employed to design a kinematic trajectory-tracking controller, and a sliding-mode dynamics controller is designed to ensure that the motion of the vehicle follows the desired velocity generated by the kinematic controller. The stability of the control system is proved by Lyapunov theory. Computer simulation demonstrates that the desired arc trajectory and state curves with B-spline optimization meet the requirements of the LCCP constraints, and that the proposed control schemes make the tracking errors converge uniformly.
Collapse
|
34
|
Partial linear varying multi-index coefficient model for integrative gene-environment interactions. Stat Sin 2016; 26:1037-1060. [PMID: 27667907 PMCID: PMC5033130 DOI: 10.5705/ss.202015.0114] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Gene-environment (G×E) interactions play key roles in many complex diseases. An increasing number of epidemiological studies have shown the combined effect of multiple environmental exposures on disease risk. However, no appropriate statistical models have been developed to rigorously assess such combined effects when G×E interactions are considered. In this paper, we propose a partial linear varying multi-index coefficient model (PLVMICM) to assess how multiple environmental factors act jointly to modify individual genetic risk of complex disease. Our model includes the varying-index coefficient model as a special case, with discrete variables admitted in the linear part. PLVMICM thus allows one to study nonlinear interaction effects between genes and continuous environments as well as linear interactions between genes and discrete environments, simultaneously. We derive a profile method to estimate the parametric components and a B-spline backfitted kernel method to estimate the nonlinear interaction functions. Consistency and asymptotic normality of the parametric and nonparametric estimates are established under regularity conditions. Hypothesis testing for the parametric coefficients and the nonparametric functions is conducted; the corresponding test statistics asymptotically follow χ2-distributions with different degrees of freedom. The utility of the method is demonstrated through extensive simulations and a case study.
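In rough symbols, such a model has the following flavor; this is a hedged reconstruction in our notation (Y trait, Z discrete covariates, X continuous exposures, G genetic variables), not the paper's exact definition:

```latex
% Hedged sketch of a partial linear varying multi-index coefficient model.
\[
Y = \mathbf{Z}^{\top}\boldsymbol{\beta}
  + \sum_{l=1}^{L} m_l\!\big(\mathbf{X}^{\top}\boldsymbol{\alpha}_l\big)\, G_l
  + \varepsilon,
\]
% The m_l are unknown smooth functions of the single indices X'alpha_l,
% estimated by a B-spline backfitted kernel method; beta enters linearly.
```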
Collapse
|
35
|
Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets. Med Image Anal 2016; 32:157-72. [PMID: 27104582 PMCID: PMC5105836 DOI: 10.1016/j.media.2016.03.007] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2015] [Revised: 02/03/2016] [Accepted: 03/23/2016] [Indexed: 11/16/2022]
Abstract
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.
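As background on the curve representation involved, the sketch below evaluates a parametric B-spline curve from control points, the kind of object a filament level-set is built on; the control points, degree, and knot vector are illustrative:

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Illustrative control points standing in for a filament centreline.
ctrl = np.array([[0.0, 0.0], [1.0, 0.8], [2.0, -0.3], [3.0, 0.5], [4.0, 0.0]])
n = len(ctrl)

# Clamped knot vector so the curve passes through the end control points.
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, 1.0, n - degree + 1),
                        [1.0] * degree))

curve = BSpline(knots, ctrl, degree)              # vector-valued spline
samples = curve(np.linspace(0.0, 1.0, 100))       # (100, 2) points on the curve
```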
Collapse
|
36
|
Fully automatic reconstruction of personalized 3D volumes of the proximal femur from 2D X-ray images. Int J Comput Assist Radiol Surg 2016; 11:1673-85. [PMID: 27038965 DOI: 10.1007/s11548-016-1400-9] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2015] [Accepted: 03/21/2016] [Indexed: 11/27/2022]
Abstract
PURPOSE Accurate preoperative planning is crucial for the outcome of total hip arthroplasty. Recently, 2D pelvic X-ray radiographs have been replaced by 3D CT. However, CT suffers from relatively high radiation dosage and cost. An alternative is to reconstruct a 3D patient-specific volume from 2D X-ray images. METHODS In this paper, building on a fully automatic image segmentation algorithm, we propose a new control point-based 2D-3D registration approach for deformable registration of a 3D volumetric template to a limited number of calibrated 2D X-ray images, and show its application to personalized reconstruction of 3D volumes of the proximal femur. The 2D-3D registration follows a hierarchical two-stage strategy: a scaled-rigid 2D-3D registration stage followed by a regularized deformable B-spline 2D-3D registration stage. In both stages, a set of uniformly spaced control points is first placed over the domain of the 3D volumetric template. The registration is then driven by computing updated positions of these control points via intensity-based 2D-2D registrations of the input X-ray images against the associated digitally reconstructed radiographs, from which the registration transformation at each stage is computed. RESULTS Evaluated on datasets of 44 patients, our method achieved an overall surface reconstruction accuracy of [Formula: see text] and an average Dice coefficient of [Formula: see text]. We further investigated the reconstruction accuracy of the cortical bone region, which is important for planning cementless total hip arthroplasty. An average cortical bone region Dice coefficient of [Formula: see text] and an inner cortical bone surface reconstruction accuracy of [Formula: see text] were found. CONCLUSIONS In summary, we developed a new approach for the reconstruction of personalized 3D volumes of the proximal femur from 2D X-ray images. Comprehensive experiments demonstrated the efficacy of the present approach.
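For context, the deformable stage of such pipelines typically takes the standard B-spline free-form deformation form sketched below (notation ours; a hedged recap rather than the authors' exact formulation):

```latex
% Standard cubic B-spline free-form deformation (hedged; notation ours).
\[
\mathbf{T}(\mathbf{x}) = \mathbf{x} +
\sum_{i,j,k} \mathbf{c}_{ijk}\,
B\!\Big(\tfrac{x-x_i}{\delta}\Big) B\!\Big(\tfrac{y-y_j}{\delta}\Big) B\!\Big(\tfrac{z-z_k}{\delta}\Big),
\]
% c_ijk: control-point displacements on a uniform grid of spacing delta;
% B: the cubic B-spline basis function.
```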
Collapse
|
37
|
Abstract
This study proposes a time-varying effect model that can be used to characterize gender-specific trajectories of health behaviors and to conduct hypothesis testing for gender differences. The motivating examples demonstrate that the proposed model is applicable not only to multi-wave longitudinal studies but also to short-term studies that involve intensive data collection. The simulation study shows that the accuracy of estimation of the trajectory functions improves as the sample size and the number of time points increase. In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all combinations of sample size and number of time points. Furthermore, the power increases as the alternative hypothesis deviates further from the null hypothesis, and this increase is steeper when the sample size and the number of time points are larger.
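The generic estimation device behind such time-varying effect models is to expand each coefficient function in a spline basis and fit the resulting linear model. The sketch below illustrates this with a B-spline basis and ordinary least squares on simulated data; real TVEM software adds penalization and pointwise inference, and all settings here are illustrative:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(t, n_basis=8, degree=3):
    """Design matrix of B-spline basis functions evaluated at times t in [0, 1]."""
    inner = np.linspace(0.0, 1.0, n_basis - degree + 1)
    knots = np.concatenate(([0.0] * degree, inner, [1.0] * degree))
    eye = np.eye(n_basis)
    return np.column_stack([BSpline(knots, eye[j], degree)(t) for j in range(n_basis)])

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 500)            # observation times
g = rng.integers(0, 2, 500)               # group indicator (0/1)
y = np.sin(2 * np.pi * t) + (0.5 + t) * g + rng.normal(0.0, 0.3, 500)

B = bspline_design(t)
X = np.hstack([B, B * g[:, None]])        # blocks for beta0(t) and beta1(t)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

grid = np.linspace(0.0, 1.0, 50)
beta1_hat = bspline_design(grid) @ coef[B.shape[1]:]   # estimated group effect over time
```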
Collapse
|
38
|
FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX. Ann Stat 2015; 43:1929-1958. [PMID: 26283801 DOI: 10.1214/15-aos1330] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
We propose a generalized partially linear functional single-index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local kernel smoothing to estimate the unspecified coefficient functions of time, and B-splines to estimate the unspecified function of the single-index component. The covariance structure is taken into account via a working model, which yields valid estimation and inference procedures whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large-sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties of the kernel and regression spline methods combined in a nested fashion had not been studied prior to this work, even in the independent data case.
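In symbols, the model described is roughly of the following form (a hedged sketch in our notation):

```latex
% Hedged sketch of the generalized partially linear functional single-index
% model: alpha(t) estimated by kernel smoothing, psi by B-splines.
\[
g\big( E[\, Y_{ij} \mid \mathbf{X}_{ij}, \mathbf{Z}_{ij} \,] \big)
= \psi\!\big( \mathbf{X}_{ij}^{\top}\boldsymbol{\alpha}(t_{ij}) \big)
+ \mathbf{Z}_{ij}^{\top}\boldsymbol{\beta},
\]
% g: known link; alpha(t): unspecified coefficient functions of time;
% psi: unspecified function of the single-index component.
```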
Collapse
|
39
|
Abstract
Dropout is a common problem in longitudinal cohort studies and clinical trials, often raising concerns of nonignorable dropout. Selection, frailty, and mixture models have been proposed to account for potentially nonignorable missingness by relating the longitudinal outcome to the time of dropout. In addition, many longitudinal studies encounter multiple types of missing data or reasons for dropout, such as loss to follow-up, disease progression, treatment modification, and death. When clinically distinct dropout reasons are present, it may be preferable to control for both dropout reason and dropout time to gain additional clinical insight. This is especially interesting when the dropout reasons and dropout times differ by the primary exposure variable. We extend a semiparametric varying-coefficient method for nonignorable dropout to accommodate dropout reason. We apply our method to untreated HIV-infected subjects recruited to the Acute Infection and Early Disease Research Program HIV cohort and compare longitudinal CD4+ T cell counts in injection drug users with those in nonusers, with two dropout reasons: antiretroviral treatment initiation and loss to follow-up.
Collapse
|
40
|
Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations. Acad Radiol 2015; 22:722-33. [PMID: 25784325 DOI: 10.1016/j.acra.2015.01.007] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2014] [Revised: 01/18/2015] [Accepted: 01/20/2015] [Indexed: 11/23/2022]
Abstract
RATIONALE AND OBJECTIVES Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. MATERIALS AND METHODS Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. RESULTS Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). CONCLUSIONS The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice.
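The comparison above is a paired Student t test across the 14 cases. A minimal sketch with SciPy, using placeholder per-case timings rather than the study's raw data:

```python
import numpy as np
from scipy import stats

# Placeholder per-case total processing times (seconds) for 14 registrations.
gpu = np.array([75, 88, 92, 70, 101, 85, 90, 79, 95, 88, 84, 97, 86, 102])
bspline = np.array([480, 555, 610, 430, 690, 540, 560, 510, 620, 575, 520, 640, 545, 700])

t_stat, p_value = stats.ttest_rel(gpu, bspline)   # paired Student t test
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```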
Collapse
|
41
|
Generalized partially linear single-index model for zero-inflated count data. Stat Med 2015; 34:876-86. [PMID: 25421596 DOI: 10.1002/sim.6382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2014] [Revised: 09/30/2014] [Accepted: 11/11/2014] [Indexed: 11/07/2022]
Abstract
Count data often arise in biomedical studies, frequently with an excess of zeros in the observed counts. The zero-inflated Poisson model provides a natural approach to accounting for the excess zero counts. In the semiparametric framework, we propose a generalized partially linear single-index model for the mean of the Poisson component, the probability of zero, or both. We develop the estimation and inference procedure via a profile maximum likelihood method. Under mild conditions, we establish the asymptotic properties of the profile likelihood estimators. The finite-sample performance of the proposed method is demonstrated by simulation studies, and the new model is illustrated with a medical care dataset.
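A hedged sketch of fitting a plain zero-inflated Poisson regression with statsmodels is shown below; the paper's single-index and partially linear components are beyond this illustration, and the data are simulated:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
lam = np.exp(0.3 + 0.6 * x)                 # Poisson-component mean
p0 = 1.0 / (1.0 + np.exp(1.0 - 0.8 * x))    # zero-inflation probability
y = np.where(rng.uniform(size=n) < p0, 0, rng.poisson(lam))

X = sm.add_constant(x)
model = ZeroInflatedPoisson(y, X, exog_infl=X, inflation='logit')
result = model.fit(disp=0)
print(result.summary())
```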
Collapse
|
42
|
Abstract
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in physics, engineering, economics, and the biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain a better understanding of the system. One key interest is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive: even after incorporating error terms, there may still be unknown sources of disturbance that lead to poor agreement between the observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing the parameters to vary with time. We propose a new regularized estimation procedure for the time-varying parameters of an ODE system, so that these parameters can change with time during transitions but remain constant within stable stages. Through simulation studies, we found that the proposed method performs well and tends to have less variation than the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamics in Ontario, Canada lead to satisfactory and meaningful results.
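The sketch below sets up an ODE system with one time-varying parameter, in the spirit of the hare-lynx example; the logistic form of the parameter path and all constants are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta(t):
    """Predation rate drifting between two stable levels around t = 50."""
    return 0.02 + 0.01 / (1.0 + np.exp(-(t - 50.0)))

def lotka_volterra(t, z):
    prey, predator = z
    return [1.0 * prey - beta(t) * prey * predator,
            -0.5 * predator + 0.01 * prey * predator]

sol = solve_ivp(lotka_volterra, (0.0, 100.0), [40.0, 9.0], dense_output=True)
paths = sol.sol(np.linspace(0.0, 100.0, 500))   # (2, 500): prey and predator
```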
Collapse
|
43
|
Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors. J Comput Graph Stat 2014; 23:1101-1125. [PMID: 25378893 DOI: 10.1080/10618600.2014.899237] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.
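The measurement-error structure being relaxed can be summarized as follows (a hedged sketch in our notation):

```latex
% Hedged sketch of the conditionally heteroscedastic measurement error setup.
\[
W_{ij} = X_i + U_{ij}, \qquad
U_{ij} = \sqrt{v(X_i)}\,\epsilon_{ij}, \quad j = 1,\dots,m_i,
\]
% W_ij: replicated proxies for the latent X_i; v(.): unknown variance function,
% so the error scale depends on X; the densities of X and epsilon are modeled
% with Dirichlet process mixtures.
```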
Collapse
|
44
|
LittleQuickWarp: an ultrafast image warping tool. Methods 2014; 73:38-42. [PMID: 25233807 DOI: 10.1016/j.ymeth.2014.09.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2014] [Revised: 08/20/2014] [Accepted: 09/06/2014] [Indexed: 10/24/2022] Open
Abstract
Warping images into a standard coordinate space is critical for many image computing tasks. However, for multi-dimensional, high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computation time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high-performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory-efficient tool that boosts 3D image warping performance dramatically while achieving warping quality similar to the widely used thin plate spline (TPS) warping. Compared to TPS, LittleQuickWarp improves the warping speed 2-5 times and reduces memory consumption 6-20 times. We have implemented LittleQuickWarp as an open-source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository.
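For reference, the thin plate spline warp used as the quality baseline has the standard form below (2D case; a recap of the classical formula, not the tool's internals):

```latex
% Classical thin plate spline warp (2D form).
\[
f(\mathbf{x}) = \mathbf{a}_0 + \mathbf{A}\,\mathbf{x}
+ \sum_{i=1}^{n} \mathbf{w}_i\, U\big(\lVert \mathbf{x}-\mathbf{p}_i \rVert\big),
\qquad U(r) = r^{2}\log r,
\]
% p_i: landmark points; the affine part (a_0, A) and weights w_i are solved
% from landmark correspondences subject to the usual side conditions.
```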
Collapse
|
45
|
Time-varying effect models for ordinal responses with applications in substance abuse research. Stat Med 2014; 33:5126-37. [PMID: 25209555 DOI: 10.1002/sim.6303] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2013] [Revised: 08/12/2014] [Accepted: 08/25/2014] [Indexed: 11/11/2022]
Abstract
Ordinal responses are very common in longitudinal data collected in substance abuse and other behavioral research. This study develops a new statistical model, with free SAS macros, that can be applied to characterize time-varying effects on ordinal responses. Our simulation study shows that the ordinal-scale time-varying effects model has very low estimation bias and, when fitting data with ordinal responses, sometimes offers considerably better performance than a model that treats the response as continuous. Contrary to the common assumption that an ordinal scale with several levels can be treated as continuous, our results indicate that it is not so much the number of levels on the ordinal scale as the skewness of the distribution that makes a difference in the relative performance of linear versus ordinal models. We use longitudinal data from a well-known study on youth at high risk for substance abuse as a motivating example to demonstrate that the proposed model can characterize the time-varying effect of negative peer influences on alcohol use in a way that is more consistent with developmental theory and the existing literature than the linear time-varying effect model.
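A hedged sketch of what such a model looks like in proportional-odds form (our notation, not the exact SAS-macro specification):

```latex
% Hedged proportional-odds sketch of a time-varying effect model for an
% ordinal response with K levels.
\[
\operatorname{logit} \Pr\{ Y_i(t) \le k \mid x_i(t) \}
= \alpha_k(t) + \beta(t)\, x_i(t), \qquad k = 1,\dots,K-1,
\]
% alpha_k(t): time-varying thresholds; beta(t): smooth covariate effect,
% both estimated with spline bases.
```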
Collapse
|
46
|
Identifying local co-regulation relationships in gene expression data. J Theor Biol 2014; 360:200-207. [PMID: 25042175 DOI: 10.1016/j.jtbi.2014.06.032] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2013] [Accepted: 06/26/2014] [Indexed: 11/24/2022]
Abstract
Identifying interesting relationships between pairs of genes that are present over a subset of experimental conditions in a gene expression data set is useful for discovering novel functional gene interactions. In this paper, we introduce a new method for identifying local co-regulation relationships (IdLCR). These local relationships describe the behaviors of gene pairs that are either up- or down-regulated throughout the identified condition subset. IdLCR first detects pairwise gene-gene relationships taking functional forms, together with the corresponding condition subsets, using a regression spline model. It then scores the relationships using a penalized Pearson correlation and ranks the corresponding gene pairs by their scores. In this way, relationships without clear biological interpretations can be filtered out and the local co-regulation relationships obtained. Ten different functional relationships were embedded in the simulated data sets; applying IdLCR to these data sets demonstrates its ability to identify the functional relationships and the condition subsets. For microarray and RNA-seq gene expression data, IdLCR identifies novel biological relationships that differ from those uncovered by IFGR and MINE.
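As a rough illustration of scoring a gene pair over a condition subset, the sketch below uses a plain Pearson correlation; the paper's penalized variant and its spline-based detection of the subset are not reproduced, and all data are simulated:

```python
import numpy as np

def subset_correlation(gene_a, gene_b, subset):
    """Pearson correlation of two expression profiles restricted to a condition subset."""
    return float(np.corrcoef(gene_a[subset], gene_b[subset])[0, 1])

# Illustrative profiles over 20 conditions; the pair co-varies only on the
# first 8 conditions, mimicking a local co-regulation pattern.
rng = np.random.default_rng(7)
shared = rng.normal(size=8)
g1 = np.concatenate([shared + rng.normal(0, 0.1, 8), rng.normal(size=12)])
g2 = np.concatenate([shared + rng.normal(0, 0.1, 8), rng.normal(size=12)])

local_score = subset_correlation(g1, g2, np.arange(8))      # high
global_score = subset_correlation(g1, g2, np.arange(20))    # diluted
```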
Collapse
|
47
|
Estimating time-varying effects for overdispersed recurrent events data with treatment switching. Biometrika 2013; 100:339-354. [PMID: 24465031 DOI: 10.1093/biomet/ass091] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
In the analysis of multivariate event times, frailty models assuming time-independent regression coefficients are often considered, mainly for their mathematical convenience. In practice, regression coefficients are often time dependent, and the temporal effects are of clinical interest. Motivated by a phase III clinical trial in multiple sclerosis, we develop a semiparametric frailty modelling approach to estimate time-varying effects for overdispersed recurrent events data with treatment switching. The proposed model incorporates the treatment switching time in the time-varying coefficients. Theoretical properties of the proposed model are established, and an efficient expectation-maximization algorithm is derived to obtain the maximum likelihood estimates. Simulation studies evaluate the numerical performance of the proposed model under various temporal treatment effect curves. The ideas in this paper can also be used for time-varying coefficient frailty models without treatment switching, as well as for alternative models when the proportional hazards assumption is violated. A multiple sclerosis dataset is analysed to illustrate our methodology.
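In rough symbols, the intensity being modelled has the following flavor (our notation; the treatment-switching term is schematic):

```latex
% Schematic recurrent-events intensity with frailty and a time-varying effect.
\[
\lambda_i(t) = u_i\, \lambda_0(t)\,
\exp\{ \beta(t)\, Z_i(t) \},
\]
% u_i: subject-level frailty capturing overdispersion; lambda_0: baseline
% intensity; Z_i(t): treatment indicator that changes value at the switching
% time, with beta(t) incorporating time since switching.
```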
Collapse
|
48
|
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis. GRAPHICAL MODELS 2012; 74:361-372. [PMID: 24976795 PMCID: PMC4068644 DOI: 10.1016/j.gmod.2012.05.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/24/2011] [Revised: 04/10/2012] [Accepted: 05/15/2012] [Indexed: 06/03/2023]
Abstract
Isogeometric analysis (IGA) is a numerical simulation method that is directly based on the NURBS representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain; hence the physical domain is parameterized with respect to a rectangle or a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties, and numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions that guarantee the regularity of the test functions. In addition, we present a modification scheme for the discretized function space in the case of insufficient regularity. It is also shown how these results can be applied to computational domains in higher dimensions that can be parameterized via sweeping.
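The construction described, and the source of the difficulty, can be written compactly:

```latex
% Test functions in IGA: NURBS basis functions composed with the inverse
% geometry map (as described in the abstract).
\[
\varphi_i = \hat{\varphi}_i \circ \mathbf{G}^{-1},
\qquad \mathbf{G} : \widehat{\Omega} \to \Omega,
\]
% G: NURBS parameterization of the physical domain; hat-phi_i: NURBS basis
% functions on the parameter domain. When G is singular, the composed phi_i
% may fail the regularity (e.g., membership in H^1(Omega)) that the Galerkin
% method requires.
```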
Collapse
|
49
|
A SIEVE M-THEOREM FOR BUNDLED PARAMETERS IN SEMIPARAMETRIC MODELS, WITH APPLICATION TO THE EFFICIENT ESTIMATION IN A LINEAR MODEL FOR CENSORED DATA. Ann Stat 2011; 39:2795-3443. [PMID: 24436500 PMCID: PMC3890689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
In many semiparametric models that involve two types of parameters, a Euclidean parameter of interest and an infinite-dimensional nuisance parameter, the two parameters are bundled together: the nuisance parameter is an unknown function whose argument contains the parameter of interest. For example, in a linear regression model for censored survival data, the unspecified error distribution function involves the regression coefficients. Motivated by the development of an efficient estimating method for the regression parameters, we propose a general sieve M-theorem for bundled parameters and apply the theorem to derive the asymptotic theory for sieve maximum likelihood estimation in the linear regression model for censored survival data. The numerical implementation of the proposed estimating method can be achieved through conventional gradient-based search algorithms such as the Newton-Raphson algorithm. We show that the proposed estimator is consistent and asymptotically normal and achieves the semiparametric efficiency bound. Simulation studies demonstrate that the proposed method performs well in practical settings and yields more efficient estimates than existing estimating-equation-based methods. An illustration with a real data example is also provided.
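The censored linear model example can be sketched as follows (our notation):

```latex
% The bundled-parameter example: a linear model for censored survival data.
\[
\log T_i = \mathbf{X}_i^{\top}\boldsymbol{\beta} + \varepsilon_i,
\qquad \varepsilon_i(\boldsymbol{\beta})
= \log T_i - \mathbf{X}_i^{\top}\boldsymbol{\beta} \sim F,
\]
% F is the unspecified error distribution; because the residuals that define
% F depend on beta, the nuisance parameter F is "bundled" with the parameter
% of interest.
```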
Collapse
|