898 results for Nonparametric Estimators
Abstract:
We use the elliptic reconstruction technique in combination with a duality approach to prove a posteriori error estimates for a fully discrete backward Euler scheme for linear parabolic equations. As an application, we combine our result with residual-based estimators from a posteriori estimation for elliptic problems to derive space-error indicators and thus a fully practical version of the estimators bounding the error in the $ \mathrm {L}_{\infty }(0,T;\mathrm {L}_2(\varOmega ))$ norm. These estimators, which are of optimal order, extend those introduced by Eriksson and Johnson in 1991 by taking into account the error induced by mesh changes and allowing for a more flexible use of the elliptic estimators. For comparison with previous results, we also derive an energy-based a posteriori estimate for the $ \mathrm {L}_{\infty }(0,T;\mathrm {L}_2(\varOmega ))$-error which simplifies a previous one given by Lakkis and Makridakis in 2006. We then compare both estimators (duality vs. energy) in practical situations and draw conclusions.
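The estimators above ride on top of a fully discrete backward Euler discretization. As a minimal sketch of that baseline scheme only (not the paper's estimator), here is backward Euler for the 1D heat equation with homogeneous Dirichlet boundaries; the grid, function names, and parameters are illustrative assumptions:

```python
import numpy as np

def backward_euler_heat(u0, dx, dt, n_steps, nu=1.0):
    """Fully discrete backward Euler scheme for u_t = nu * u_xx on a
    uniform grid with homogeneous Dirichlet boundary conditions."""
    n = len(u0)
    r = nu * dt / dx ** 2
    # (I - dt*nu*L) u^{k+1} = u^k, with L the tridiagonal discrete Laplacian
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(A, u)   # implicit step: unconditionally stable
    return u

x = np.linspace(0, 1, 51)[1:-1]     # interior nodes of a uniform grid
u = backward_euler_heat(np.sin(np.pi * x), dx=1 / 50, dt=1e-3, n_steps=100)
```

The sine initial condition is a discrete eigenmode, so the numerical solution simply decays toward the exact rate exp(-pi^2 t), which makes the scheme easy to sanity-check.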
Abstract:
A new sparse kernel density estimator with tunable kernels is introduced within a forward constrained regression framework whereby the nonnegative and summing-to-unity constraints of the mixing weights can easily be satisfied. Based on the minimum integrated square error criterion, a recursive algorithm is developed to select significant kernels one at a time, and the kernel width of the selected kernel is then tuned using the gradient descent algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing very sparse kernel density estimators with accuracy competitive with that of existing kernel density estimators.
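The core idea of selecting kernels one at a time under an integrated square error criterion can be sketched in a deliberately simplified form: equal mixing weights, one fixed bandwidth, and no gradient-descent width tuning, so this is much cruder than the paper's algorithm. All names and constants are illustrative:

```python
import numpy as np

def gauss_cross(a, b, h):
    # Closed form: integral of N(x; a, h^2) * N(x; b, h^2) dx = N(a - b; 0, 2 h^2)
    return np.exp(-(a - b) ** 2 / (4 * h ** 2)) / np.sqrt(4 * np.pi * h ** 2)

def sparse_kde_centers(x, h, k):
    """Greedy forward selection of k kernel centers (equal weights),
    minimizing integrated squared error against the full Parzen estimate."""
    n = len(x)
    G = gauss_cross(x[:, None], x[None, :], h)  # pairwise kernel overlaps
    target = G.mean(axis=1)                     # overlap with the full estimate
    chosen = []
    for m in range(1, k + 1):
        best, best_cost = None, np.inf
        for j in range(n):
            if j in chosen:
                continue
            S = chosen + [j]
            w = 1.0 / m
            # ISE up to a constant: w^2 * sum G_S - 2 w * sum target_S
            cost = w * w * G[np.ix_(S, S)].sum() - 2 * w * target[S].sum()
            if cost < best_cost:
                best, best_cost = j, cost
        chosen.append(best)
    return x[chosen]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
centers = sparse_kde_centers(x, h=0.3, k=2)
```

On a well-separated bimodal sample, the two greedily chosen centers land on opposite modes, since putting both kernels on one mode is penalized by their overlap term.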
Abstract:
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion, combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is first applied to find this covariance matrix using the expectation-maximization algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with that of existing kernel density estimators.
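The starting point above is a Parzen window estimator with a common kernel covariance. A minimal 1D sketch of that baseline (scalar bandwidth, none of the paper's EM or Riemannian machinery; names are illustrative):

```python
import numpy as np

def parzen_window(x, data, h):
    """Parzen window (Gaussian kernel) density estimate at points x,
    with a common scalar bandwidth h shared by every kernel."""
    x = np.atleast_1d(x)
    diffs = x[:, None] - data[None, :]
    k = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return k.mean(axis=1)   # average of one Gaussian per data point

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 1000)
grid = np.linspace(-4, 4, 81)
fhat = parzen_window(grid, data, h=0.3)
```

The estimate is automatically a density (nonnegative, integrating to one), which is the property the sparse mixture variants above must preserve through their weight constraints.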
Abstract:
Purpose: This study evaluated and compared in vitro the microstructure and mineral composition of the dental enamel of permanent and deciduous teeth. Methods: Sound third molars (n = 12) and second primary molars (n = 12) were selected and randomly assigned to the following groups, according to the analysis method performed (n = 4): scanning electron microscopy (SEM), X-ray diffraction (XRD) and energy-dispersive X-ray spectrometry (EDS). Qualitative and quantitative comparisons of the dental enamel were made. The microscopic findings were analyzed statistically by a nonparametric test (Kruskal-Wallis). Measurements of prism number and thickness were made on SEM photomicrographs. The relative amounts of calcium (Ca) and phosphorus (P) were determined by EDS investigation. Chemical phases present in both types of teeth were observed by XRD analysis. Results: The mean enamel thickness was 1.14 mm in deciduous teeth and 2.58 mm in permanent teeth. The mean rod head diameter in deciduous teeth was statistically similar to that of permanent teeth enamel, and a slight decrease from the outer enamel surface to the region next to the enamel-dentine junction (EDJ) was observed. The numerical density of enamel rods was higher in the deciduous teeth, mainly near the EDJ, a statistically significant difference. The percentage of Ca and P was higher in the permanent teeth enamel. Conclusions: The primary enamel structure showed a lower level of Ca and P, a smaller thickness and a higher numerical density of rods. Microsc. Res. Tech. 73:572-577, 2010. (C) 2009 Wiley-Liss, Inc.
Abstract:
The present study aimed to evaluate whether the association between a calcium hydroxide paste (Calen paste) and 0.4% chlorhexidine (CHX) affects the development of the osteogenic phenotype in vitro. With rat calvarial osteogenic cell cultures, the following parameters were assayed: cell morphology and viability, alkaline phosphatase activity, total protein content, bone sialoprotein immunolocalization, and mineralized nodule formation. Comparisons were carried out by using the nonparametric Kruskal-Wallis test (level of significance, 5%). The results showed that the association between Calen paste and 0.4% CHX did not affect the development of the osteogenic phenotype. No significant changes were observed in terms of cell shape, cell viability, alkaline phosphatase activity, and the total amount of bone-like nodule formation among control, Calen, or Calen + CHX groups. The strategy to combine Ca(OH)2 and CHX to promote a desirable synergistic antibacterial effect during endodontic treatment in vivo might not significantly affect osteoblastic cell biology. (J Endod 2008;34:1485-1489)
Abstract:
BACKGROUND: Previous pooled analyses have reported an association between magnetic fields and childhood leukaemia. We present a pooled analysis based on primary data from studies on residential magnetic fields and childhood leukaemia published after 2000. METHODS: Seven studies with a total of 10 865 cases and 12 853 controls were included. The main analysis focused on 24-h magnetic field measurements or calculated fields in residences. RESULTS: In the combined results, risk increased with increase in exposure, but the estimates were imprecise. The odds ratios for exposure categories of 0.1-0.2 μT, 0.2-0.3 μT and ≥0.3 μT, compared with <0.1 μT, were 1.07 (95% CI 0.81-1.41), 1.16 (0.69-1.93) and 1.44 (0.88-2.36), respectively. Without the most influential study from Brazil, the odds ratios increased somewhat. An increasing trend was also suggested by a nonparametric analysis conducted using a generalised additive model. CONCLUSIONS: Our results are in line with previous pooled analyses showing an association between magnetic fields and childhood leukaemia. Overall, the association is weaker in the most recently conducted studies, but these studies are small and lack the methodological improvements needed to resolve the apparent association. We conclude that recent studies on magnetic fields and childhood leukaemia do not alter the previous assessment that magnetic fields are possibly carcinogenic. British Journal of Cancer (2010) 103, 1128-1135. doi: 10.1038/sj.bjc.6605838 www.bjcancer.com (c) 2010 Cancer Research UK
Abstract:
We studied, for the first time, the near-infrared, stellar and baryonic Tully-Fisher relations for a sample of field galaxies taken from a homogeneous Fabry-Perot sample of galaxies [the Gassendi Hα survey of SPirals (GHASP)]. The main advantage of GHASP over other samples is that the maximum rotational velocities were estimated from 2D velocity fields, avoiding assumptions about the inclination and position angle of the galaxies. By combining these data with 2MASS photometry, optical colours, HI masses and different mass-to-light ratio estimators, we found slopes of 4.48 +/- 0.38 and 3.64 +/- 0.28 for the stellar and baryonic Tully-Fisher relations, respectively. We found that these values do not change significantly when different mass-to-light ratio recipes are used. We also point out, for the first time, that rising rotation curves, as well as asymmetric ones, show a larger dispersion in the Tully-Fisher relation than flat or symmetric ones. Using the baryonic mass and the optical radius of galaxies, we found that the surface baryonic mass density is almost constant for all the galaxies of this sample. In this study we also emphasize the presence of a break in the NIR Tully-Fisher relation at M(H,K) ≈ -20 and we confirm that late-type galaxies present higher total-to-baryonic mass ratios than early-type spirals, suggesting that supernova feedback is actually an important issue in late-type spirals. Due to the well-defined sample selection criteria and the homogeneity of the data analysis, the Tully-Fisher relation for GHASP galaxies can be used as a reference for the study of this relation in other environments and at higher redshifts.
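The Tully-Fisher relation is a power law, so its slope is recovered by a linear fit in log-log space. A toy illustration on synthetic data built to follow the quoted baryonic slope (every number below is made up for illustration, not taken from GHASP):

```python
import numpy as np

# Hypothetical rotation velocities (log10 km/s) and baryonic masses,
# drawn to follow M_b proportional to V^3.64 plus lognormal scatter.
rng = np.random.default_rng(2)
logV = rng.uniform(1.8, 2.5, 60)                 # roughly 63-316 km/s
logM = 2.0 + 3.64 * logV + rng.normal(0, 0.1, 60)

# Ordinary least-squares fit of the log-log (power-law) relation
slope, intercept = np.polyfit(logV, logM, 1)
```

In practice the quoted slope uncertainties come from the scatter of real galaxies around this fit, which the abstract notes is larger for rising or asymmetric rotation curves.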
Abstract:
In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student-t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest with the asymptotic chi-square distribution, which guarantees correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported. (C) 2009 The Korean Statistical Society. Published by Elsevier B.V. All rights reserved.
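The robustness gained by replacing the normal with a Student-t can be seen in a toy numerical ML fit: the fitted t location largely ignores gross errors that pull the sample mean. This uses SciPy's generic ML fitter on a plain univariate sample, not the authors' heteroscedastic measurement error model; the data are invented:

```python
import numpy as np
from scipy import stats

# A normal core around 10 plus a few one-sided gross errors
rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(10.0, 1.0, 200), [40.0, 45.0, 50.0]])

# Numerical maximum likelihood fit of a Student-t model
df, loc, scale = stats.t.fit(y)
```

The sample mean is dragged upward by the outliers, while the heavy-tailed t model downweights them and keeps its location estimate near the core of the data.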
Abstract:
We consider bipartitions of one-dimensional extended systems whose probability distribution functions describe stationary states of stochastic models. We define estimators of the information shared between the two subsystems. If the correlation length is finite, the estimators stay finite for large system sizes. If the correlation length diverges, so do the estimators. The definition of the estimators is inspired by information theory. We look at several models and compare the behaviors of the estimators in the finite-size scaling limit. Analytical and numerical methods as well as Monte Carlo simulations are used. We show how the finite-size scaling functions change for various phase transitions, including the case where one has conformal invariance.
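The estimators are inspired by information theory; the underlying quantity is the information shared between the two subsystems. A minimal sketch computing Shannon mutual information from a joint distribution over bipartition configurations (a toy two-state example, not one of the paper's stochastic models):

```python
import numpy as np

def mutual_information(joint):
    """Shannon mutual information (in bits) from a joint probability
    table p(a, b) over the configurations of the two subsystems."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal of subsystem A
    pb = joint.sum(axis=0, keepdims=True)   # marginal of subsystem B
    mask = joint > 0                        # skip zero-probability cells
    return float((joint[mask] * np.log2(joint[mask] / (pa * pb)[mask])).sum())

# Perfectly correlated bipartition: both halves copy one fair bit -> 1 bit
perfectly_correlated = np.array([[0.5, 0.0],
                                 [0.0, 0.5]])
# Independent halves -> 0 bits shared
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])
```

For a finite correlation length, such a shared-information estimator saturates with system size; its divergence signals the critical behavior discussed above.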
Abstract:
There has been great interest in deciding whether a combinatorial structure satisfies some property, or in estimating the value of some numerical function associated with this combinatorial structure, by considering only a randomly chosen substructure of sufficiently large, but constant size. These problems are called property testing and parameter testing, where a property or parameter is said to be testable if it can be estimated accurately in this way. The algorithmic appeal is evident, as, conditional on sampling, this leads to reliable constant-time randomized estimators. Our paper addresses property testing and parameter testing for permutations in a subpermutation perspective; more precisely, we investigate permutation properties and parameters that can be well approximated based on a randomly chosen subpermutation of much smaller size. In this context, we use a theory of convergence of permutation sequences developed by the present authors [C. Hoppen, Y. Kohayakawa, C.G. Moreira, R.M. Sampaio, Limits of permutation sequences through permutation regularity, Manuscript, 2010, 34 pp.] to characterize testable permutation parameters along the lines of the work of Borgs et al. [C. Borgs, J. Chayes, L. Lovász, V.T. Sós, B. Szegedy, K. Vesztergombi, Graph limits and parameter testing, in: STOC'06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, ACM, New York, 2006, pp. 261-270.] in the case of graphs. Moreover, we obtain a permutation result in the direction of a famous result of Alon and Shapira [N. Alon, A. Shapira, A characterization of the (natural) graph properties testable with one-sided error, SIAM J. Comput. 37 (6) (2008) 1703-1727.] stating that every hereditary graph property is testable. (C) 2011 Elsevier B.V. All rights reserved.
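Parameter testing means estimating a permutation parameter from a constant-size random subpermutation. As a toy illustration with the inversion density, which is trivially estimable this way (the choice of parameter, sizes, and names is mine, not the paper's):

```python
import random

def inversion_density(perm):
    """Fraction of pairs (i, j), i < j, that are inverted in perm."""
    n = len(perm)
    inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
    return inv / (n * (n - 1) / 2)

def sampled_estimate(perm, k, trials, rng):
    """Constant-size estimator: average inversion density over random
    k-point subpermutations (values read off k sorted positions)."""
    total = 0.0
    for _ in range(trials):
        idx = sorted(rng.sample(range(len(perm)), k))
        total += inversion_density([perm[i] for i in idx])
    return total / trials

rng = random.Random(0)
perm = list(range(1000))[::-1]      # reverse permutation: density exactly 1
est = sampled_estimate(perm, k=10, trials=50, rng=rng)
```

Each subpermutation look costs O(k^2) regardless of the size of `perm`, which is the "constant-time randomized estimator" appeal mentioned above.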
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other, from the study subjects' response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moment estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
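Empirical predictors plug method-of-moments variance estimates into the usual shrinkage formula. A sketch for balanced one-way clustered data under the standard ME model, which illustrates the generic empirical BLUP rather than the finite population mixed model predictor; all names and simulation settings are assumptions:

```python
import numpy as np

def empirical_blup(y):
    """Empirical BLUP of cluster latent means for balanced one-way data
    y (clusters x units), with variance components estimated by moments."""
    m, n = y.shape
    cluster_means = y.mean(axis=1)
    grand_mean = y.mean()
    s2_within = y.var(axis=1, ddof=1).mean()                # response error
    s2_between = cluster_means.var(ddof=1) - s2_within / n  # latent variance
    s2_between = max(s2_between, 0.0)                       # truncate at zero
    # Shrink each observed cluster mean toward the grand mean
    shrink = s2_between / (s2_between + s2_within / n)
    return grand_mean + shrink * (cluster_means - grand_mean)

rng = np.random.default_rng(3)
true_means = rng.normal(0, 2, 20)
y = true_means[:, None] + rng.normal(0, 1, (20, 8))
blup = empirical_blup(y)
```

Because the shrinkage factor lies in [0, 1], every prediction sits between the raw cluster mean and the grand mean, which is the mechanism behind the MSE comparisons discussed above.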
Abstract:
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1-39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343-365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33-53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655-661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.
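In the Weibull model the scale can be maximized out in closed form, so the (unadjusted) profile likelihood of the shape parameter is easy to evaluate. A sketch for the no-covariate, uncensored case only; the paper's regression structure and modified profile likelihoods are not implemented here:

```python
import numpy as np

def weibull_profile_loglik(k, x):
    """Profile log-likelihood of the Weibull shape k: the scale is
    profiled out in closed form via lambda^k = mean(x^k)."""
    n = len(x)
    return (n * np.log(k) - n * np.log(np.mean(x ** k))
            + (k - 1) * np.log(x).sum() - n)

rng = np.random.default_rng(6)
x = rng.weibull(2.0, 500) * 3.0          # true shape 2, scale 3 (illustrative)
grid = np.linspace(0.5, 5.0, 451)
k_hat = grid[np.argmax([weibull_profile_loglik(k, x) for k in grid])]
```

The adjustments discussed in the abstract modify exactly this profile curve to reduce the small-sample bias of the shape estimator and of the likelihood ratio tests built from it.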
Abstract:
In this paper, a novel statistical test is introduced to compare two locally stationary time series. The proposed approach is a Wald test considering time-varying autoregressive modeling and function projections in adequate spaces. The covariance structure of the innovations may also be time-varying. In order to obtain function estimators for the time-varying autoregressive parameters, we consider function expansions in spline and wavelet bases. Simulation studies provide evidence that the proposed test has a good performance. We also assess its usefulness when applied to a financial time series.
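A time-varying AR model expands each autoregressive coefficient in a function basis. A simplified sketch of a tvAR(1) fitted by least squares, using a polynomial basis as a stand-in for the spline/wavelet bases of the paper; the simulated coefficient path and all names are assumptions:

```python
import numpy as np

def fit_tvar1(x, degree=3):
    """Least-squares fit of x_t = a(t/T) x_{t-1} + e_t, with a(.)
    expanded in a polynomial basis on rescaled time u = t/T."""
    T = len(x)
    u = np.arange(1, T) / T                   # rescaled time in (0, 1]
    basis = np.vander(u, degree + 1)          # columns u^3, u^2, u, 1
    X = basis * x[:-1, None]                  # regressors b_k(u_t) * x_{t-1}
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return lambda t: np.vander(np.atleast_1d(t), degree + 1) @ coef

# Simulate a series whose AR coefficient drifts linearly from 0.2 to 0.8
rng = np.random.default_rng(4)
T = 5000
x = np.zeros(T)
for t in range(1, T):
    x[t] = (0.2 + 0.6 * t / T) * x[t - 1] + rng.normal()

a_hat = fit_tvar1(x)
```

Comparing two such fitted coefficient functions (with an estimate of their covariance) is what the Wald test above formalizes.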
Abstract:
In this paper we extend partial linear models with normal errors to Student-t errors. Penalized likelihood equations are applied to derive the maximum likelihood estimates, which appear to be robust against outlying observations in the sense of the Mahalanobis distance. In order to study the sensitivity of the penalized estimates under some usual perturbation schemes in the model or data, the local influence curvatures are derived and some diagnostic graphics are proposed. A motivating example, preliminarily analyzed under normal errors, is reanalyzed under Student-t errors. The local influence approach is used to compare the sensitivity of the model estimates. (C) 2010 Elsevier B.V. All rights reserved.