930 results for kernel estimates


Relevance: 20.00%

Abstract:

Estimates of phenotypic, genetic and residual variances for reproductive traits in 5903 Nellore bulls were obtained using multiple-trait derivative-free restricted maximum likelihood. The heritability estimates were 0.24 ± 0.05 for scrotal circumference at 450 days of age and 0.37 ± 0.05 for scrotal circumference at 21 months of age, at the time of the breeding soundness evaluation; 0.24 ± 0.05 and 0.26 ± 0.05 for left and right testicle length; 0.29 ± 0.05 and 0.31 ± 0.05 for left and right testicle width; 0.12 ± 0.04 for testicle format; 0.33 ± 0.06 for testicle volume; 0.11 ± 0.03 for gross motility; 0.08 ± 0.03 for individual motility and 0.05 ± 0.02 for spermatic vigor; and 0.20 ± 0.04, 0.03 ± 0.02 and 0.19 ± 0.04 for major defects, minor defects and total defects, respectively. The heritability values for testicular biometric traits were moderate to high, while the seminal traits presented low values. Genetic correlations between scrotal circumference and all the other reproductive traits were favorable, suggesting scrotal circumference as the trait of choice in the selection of bulls.
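For context, the heritabilities reported above are, in the standard animal-model setting, ratios of the REML variance components; a minimal statement of that definition (notation assumed here, not quoted from the paper):

```latex
% Narrow-sense heritability from REML variance components
% (standard definition; notation is an assumption, not the paper's):
h^2 = \frac{\sigma^2_A}{\sigma^2_P},
\qquad \sigma^2_P = \sigma^2_A + \sigma^2_E,
% where \sigma^2_A is the additive genetic variance and
% \sigma^2_E the residual variance.
```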

Relevance: 20.00%

Abstract:

The main goal of this paper is to derive long-time estimates of the energy for higher order hyperbolic equations with time-dependent coefficients. In particular, we estimate the energy in the hyperbolic zone of the extended phase space by means of a function f(t) which depends on the principal part and on the coefficients of the terms of order m − 1. We then look for sufficient conditions that guarantee the same energy estimate from above in the whole extended phase space. We call this class of estimates hyperbolic-like, since the energy behavior depends deeply on the hyperbolic structure of the equation. In some cases, these estimates produce a dissipative effect on the energy. (C) 2012 Elsevier Inc. All rights reserved.

Relevance: 20.00%

Abstract:

The objective of this study was to estimate the prevalence of inadequate micronutrient intake and of excessive sodium intake among adults aged 19 years and older in the city of Sao Paulo, Brazil. Twenty-four-hour dietary recall and sociodemographic data were collected from each participant (n=1,663) in a cross-sectional study, the Inquiry of Health of Sao Paulo (ISA-2003), of a representative sample of the adult population of the city in 2003. The variability in intake was measured through two replications of the 24-hour recall in a subsample of this population in 2007 (ISA-2007). Usual intake was estimated with the PC-SIDE program (version 1.0, 2003, Department of Statistics, Iowa State University), which implements the approach developed at Iowa State University. The prevalence of nutrient inadequacy was calculated using the Estimated Average Requirement (EAR) cut-point method for vitamins A and C, thiamin, riboflavin, niacin, copper, phosphorus, and selenium. For vitamin D, pantothenic acid, manganese, and sodium, the proportion of individuals with usual intake equal to or above the Adequate Intake value was calculated. For sodium, the percentage of individuals with intake equal to or above the Tolerable Upper Intake Level was also calculated. The highest prevalences of inadequacy for males and females, respectively, occurred for vitamin A (67% and 58%), vitamin C (52% and 62%), thiamin (41% and 50%), and riboflavin (29% and 19%). Adjusting for within-person variation yielded lower prevalences of inadequacy, because within-person variability is removed. All adult residents of Sao Paulo had excess sodium intake, and the rates of inadequacy were high for certain key micronutrients. J Acad Nutr Diet. 2012;112:1614-1618.
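As an illustration of the EAR cut-point method described above, a minimal sketch with hypothetical nutrient values (the study first estimated usual-intake distributions with PC-SIDE; here a simulated intake vector stands in for that output):

```python
import numpy as np

# Minimal sketch of the EAR cut-point method: the prevalence of
# inadequacy is the proportion of individuals whose *usual* intake
# falls below the Estimated Average Requirement (EAR).
# Intake values below are hypothetical, not the study's data.

def prevalence_of_inadequacy(usual_intake, ear):
    """Fraction of individuals with usual intake below the EAR."""
    return float(np.mean(np.asarray(usual_intake) < ear))

# Hypothetical usual vitamin C intakes (mg/day); 75 mg/day is the
# adult male EAR for vitamin C.
rng = np.random.default_rng(0)
intakes = rng.lognormal(mean=4.0, sigma=0.5, size=1663)
print(f"prevalence below EAR: {prevalence_of_inadequacy(intakes, 75):.1%}")
```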

Relevance: 20.00%

Abstract:

Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed the skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm, based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, as opposed to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered by Azevedo et al.) and a Jeffreys prior for the asymmetry parameter, and we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performs as well as that of Azevedo et al. in terms of parameter recovery, mainly under the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though this impact is relatively small. A real data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with those obtained by Azevedo et al. They indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnostics and can be very useful for fitting more complex skew IRT models.
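The stochastic representation the proposed algorithm builds on writes a skew-normal variate as a combination of a half-normal and an independent normal variable. A minimal sampler under the direct parameterization (the paper works with the centred parameterization, which is a reparameterization of these draws):

```python
import numpy as np

# Henze's (1986) stochastic representation of the skew-normal:
# with U, V independent N(0,1) and delta = shape / sqrt(1 + shape^2),
# X = delta*|U| + sqrt(1 - delta^2)*V follows a standard skew-normal
# with shape parameter `shape`. Conditioning on |U| is what makes a
# hierarchical (data-augmented) MHWGS scheme with a single
# Metropolis-Hastings step possible.

def rskewnormal(n, shape, rng=None):
    rng = rng or np.random.default_rng()
    delta = shape / np.sqrt(1.0 + shape**2)
    u = np.abs(rng.standard_normal(n))   # half-normal component
    v = rng.standard_normal(n)           # independent normal component
    return delta * u + np.sqrt(1.0 - delta**2) * v

draws = rskewnormal(10_000, shape=3.0, rng=np.random.default_rng(1))
print(f"sample mean: {draws.mean():.3f} (positive skew expected)")
```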

Relevance: 20.00%

Abstract:

The method of steepest descent is used to study the integral kernel of a family of normal random matrix ensembles with eigenvalue distribution
$$P_N(z_1, \ldots, z_N) = Z_N^{-1}\, e^{-N \sum_{i=1}^{N} V_\alpha(z_i)} \prod_{1 \le i < j \le N} |z_i - z_j|^2,$$
where $V_\alpha(z) = |z|^\alpha$, $z \in \mathbb{C}$ and $\alpha \in (0, \infty)$. Asymptotic formulas with error estimates on sectors are obtained. A corollary of these expansions is a scaling limit for the n-point function in terms of the integral kernel for the classical Segal-Bargmann space. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.3688293]
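For context, the classical Segal-Bargmann (Fock) space appearing in the scaling limit has a well-known exponential reproducing kernel:

```latex
% Reproducing kernel of the classical Segal--Bargmann (Fock) space
% of entire functions, with Gaussian weight \pi^{-1} e^{-|z|^2}
% on \mathbb{C} (standard fact, stated here for reference):
K(z, w) = e^{z \bar{w}}, \qquad z, w \in \mathbb{C}.
```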

Relevance: 20.00%

Abstract:

Aim: Estimates of geographic range size derived from natural history museum specimens are probably biased for many species. We aim to determine how bias in these estimates relates to range size.

Location: Computer simulations based on herbarium specimen records from localities ranging from the southern United States to northern Argentina.

Methods: We used theory on the sampling distribution of the mean and variance to develop working hypotheses about how range size, defined as area of occupancy (AOO), is related to the inter-specific distribution of: (1) mean collection effort per area across the range of a species (MC); (2) variance in collection effort per area across the range of a species (VC); and (3) proportional bias in AOO estimates (PBias: the difference between the expected value of the AOO estimate and true AOO, divided by true AOO). We tested predictions from these hypotheses using computer simulations based on a dataset of more than 29,000 herbarium specimen records documenting occurrences of 377 plant species in the tribe Bignonieae (Bignoniaceae).

Results: The working hypotheses predicted that the means of the inter-specific distributions of MC, VC and PBias are independent of AOO, but that the respective variances and skewnesses decrease with increasing AOO. Computer simulations supported all but one prediction: the variance of the inter-specific distribution of VC did not decrease with increasing AOO.

Main conclusions: Our results suggest that, despite an invariant mean, the dispersion and skewness of the inter-specific distribution of PBias decrease as AOO increases. As AOO increased, range size was less severely underestimated for a large proportion of simulated species; however, range size estimates with extremely low bias were also less common.
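A minimal sketch of the proportional-bias quantity defined above, with a toy occupancy grid and random collection events standing in for the herbarium-record simulations (grid size, sampling intensity and replicate count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Sketch of proportional bias in area-of-occupancy (AOO) estimates:
# PBias = (E[estimated AOO] - true AOO) / true AOO.
# A species occupies `true_aoo` grid cells; each simulated collection
# records one occupied cell (with replacement), and the estimated AOO
# is the number of distinct cells holding at least one record.

rng = np.random.default_rng(42)

def pbias(true_aoo, n_collections, n_reps=1000):
    estimates = np.empty(n_reps)
    for r in range(n_reps):
        recorded = rng.integers(0, true_aoo, size=n_collections)
        estimates[r] = np.unique(recorded).size
    return (estimates.mean() - true_aoo) / true_aoo

# With effort fixed, range size is underestimated more severely when
# collection effort is low relative to the size of the range.
for aoo in (10, 100, 1000):
    print(f"AOO={aoo:5d}  PBias={pbias(aoo, n_collections=200):+.2f}")
```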

Relevance: 20.00%

Abstract:

Oil content and grain yield in maize are negatively correlated, and so far the development of high-oil, high-yielding hybrids has not been accomplished. A full understanding of the inheritance of kernel oil content is therefore necessary to implement a breeding program that improves both traits simultaneously. Conventional and molecular-marker analyses of a Design III experiment were carried out on a reference population developed from two tropical inbred lines divergent for kernel oil content. The results showed that the additive variance was considerably larger than the dominance variance, and the heritability coefficient was very high. Sixteen QTL were mapped; they were not evenly distributed along the chromosomes and accounted for 30.91% of the genetic variance. The average level of dominance computed from both the conventional and the QTL analysis was partial dominance. The overall results indicated that additive effects were more important than dominance effects; the latter were not unidirectional, so heterosis could not be exploited in crosses. Most of the favorable QTL alleles were in the high-oil parental inbred and could be transferred to other inbreds via marker-assisted backcross selection. Our results, coupled with reported information, indicate that high-oil hybrids with acceptable yields could be developed by using marker-assisted selection involving oil content, grain yield and its components. Finally, to exploit the xenia effect and increase the oil content even further, these hybrids should be used in the Top Cross(TM) procedure.

Relevance: 20.00%

Abstract:

We present an analytic description of numerical results for the Landau-gauge SU(2) gluon propagator D(p²), obtained from lattice simulations (in the scaling region) for the largest lattice sizes to date, in d = 2, 3 and 4 space-time dimensions. Fits to the gluon data in 3d and in 4d show very good agreement with the tree-level prediction of the refined Gribov-Zwanziger (RGZ) framework, supporting a massive behavior for D(p²) in the infrared limit. In particular, we investigate the propagator's pole structure and provide estimates of the dynamical mass scales that can be associated with dimension-two condensates in the theory. In the 2d case, fitting the data requires a noninteger power of the momentum p in the numerator of the expression for D(p²). In this case, an infinite-volume-limit extrapolation gives D(0) = 0. Our analysis suggests that this result is related to a particular symmetry in the complex-pole structure of the propagator, and not to purely imaginary poles, as would be expected in the original Gribov-Zwanziger scenario.
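The tree-level RGZ prediction referred to above is commonly written as a rational function of p² whose mass parameters are tied to the dimension-two condensates; a schematic form in the usual notation (parameter names are an assumption here, not quoted from the paper):

```latex
% Tree-level gluon propagator in the refined Gribov--Zwanziger
% framework (commonly used form; normalization and parameter
% names are assumptions, not the paper's notation):
D(p^2) \;=\; \frac{p^2 + M^2}{p^4 + (M^2 + m^2)\, p^2 + M^2 m^2 + \lambda^4}
```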

Relevance: 20.00%

Abstract:

Background: Statistical methods for estimating usual intake require at least two short-term dietary measurements in a subsample of the target population. However, the percentage of individuals with a second dietary measurement (the replication rate) may influence the precision of estimates such as percentiles and proportions of individuals below intake cut-offs.

Objective: To investigate the precision of usual food intake estimates under different replication rates and different sample sizes.

Participants/setting: Adolescents participating in the continuous National Health and Nutrition Examination Survey 2007-2008 (n=1,304) who completed two 24-hour recalls.

Statistical analyses performed: The National Cancer Institute (NCI) method was used to estimate the usual intake of dark green vegetables in the original sample of 1,304 adolescents with a replication rate of 100%. A bootstrap with 100 replications was performed to estimate CIs for percentiles and for proportions of individuals below intake cut-offs. Using the same bootstrap replications, four sets of data sets were sampled with lower replication rates (80%, 60%, 40%, and 20%). For each data set created, the NCI method was run and percentiles, CIs, and proportions of individuals below cut-offs were calculated. Precision was checked by comparing each CI obtained from the data sets with reduced replication rates against the CI obtained from the original data set. Further, we sampled 1,000, 750, 500, and 250 individuals from the original data set and performed the same analytical procedures.

Results: Percentiles of intake and percentages of individuals below the cut-off points were similar across replication rates and sample sizes, but the CIs widened as the replication rate decreased. Wider CIs were observed at replication rates of 40% and 20%.

Conclusions: The precision of usual intake estimates decreased when low replication rates were used. However, even with different sample sizes, replication rates above 40% may not lead to an important loss of precision. J Acad Nutr Diet. 2012;112:1015-1020.
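A minimal sketch of the subsampling scheme described above: second recalls are kept for only a fraction of individuals, a percentile of intake is estimated, and the spread of that estimate across bootstrap replications is compared. The usual-intake modeling itself (done with the NCI method in the study) is replaced here by a simple per-person mean, and all data are simulated:

```python
import numpy as np

# Sketch of the replication-rate experiment: keep the second 24-hour
# recall for only a fraction of individuals, estimate a percentile of
# intake, and watch the bootstrap CI widen as the rate drops.
# A per-person mean of available recalls stands in for the NCI model.

rng = np.random.default_rng(7)
n = 1304
usual = rng.gamma(shape=2.0, scale=20.0, size=n)           # latent usual intake
recalls = usual[:, None] + rng.normal(0, 15, size=(n, 2))  # two noisy recalls

def p50_estimate(recalls, replication_rate):
    keep_second = rng.random(len(recalls)) < replication_rate
    person_mean = np.where(keep_second, recalls.mean(axis=1), recalls[:, 0])
    return np.median(person_mean)

for rate in (1.0, 0.8, 0.6, 0.4, 0.2):
    boot = [p50_estimate(recalls[rng.integers(0, n, n)], rate)
            for _ in range(100)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"rate={rate:.0%}  median 95% CI: [{lo:5.1f}, {hi:5.1f}]")
```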

Relevance: 20.00%

Abstract:

We analyze reproducing kernel Hilbert spaces of positive definite kernels on a topological space X that is either first countable or locally compact. The results include versions of Mercer's theorem and theorems on the embedding of these spaces into spaces of continuous and square-integrable functions.
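For reference, the classical statement being generalized here:

```latex
% Mercer's theorem (classical form, stated for context): for a
% continuous positive definite kernel K on a compact space X with a
% strictly positive Borel measure \mu, the integral operator
% f \mapsto \int_X K(\cdot, y)\, f(y)\, \mathrm{d}\mu(y)
% has eigenpairs (\lambda_n, \varphi_n) with \lambda_n \ge 0, and
K(x, y) = \sum_{n \ge 1} \lambda_n\, \varphi_n(x)\, \varphi_n(y),
% with the series converging absolutely and uniformly.
```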

Relevance: 20.00%

Abstract:

This paper examines the influence of a secondary variable on collocated cokriging as a function of its correlation with the primary variable. For this study, five exhaustive data sets were generated by computer, from which samples with 60 and 104 data points were drawn using stratified random sampling. The exhaustive data sets were generated starting from a pair of primary and secondary variables showing a good correlation; successive sets were then generated by adding increasing amounts of white noise, so that the correlation became progressively poorer. Using these samples, it was possible to determine how primary and secondary information is used in estimating an unsampled location, according to the correlation level.
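A minimal sketch of the degradation step described above: white noise of increasing variance is added to a secondary variable that starts out well correlated with the primary one. The field generation and the cokriging weights themselves are beyond the sketch, and the noise levels are illustrative:

```python
import numpy as np

# Sketch of how the exhaustive data sets were degraded: start from a
# secondary variable well correlated with the primary one, then add
# white noise of increasing variance so the correlation gets poorer.
# The collocated-cokriging step itself is not reproduced here.

rng = np.random.default_rng(3)
primary = rng.normal(0.0, 1.0, size=10_000)
secondary = 0.9 * primary + 0.1 * rng.normal(size=primary.size)

for noise_sd in (0.0, 0.5, 1.0, 2.0):
    degraded = secondary + rng.normal(0.0, noise_sd, size=secondary.size)
    r = np.corrcoef(primary, degraded)[0, 1]
    print(f"noise sd={noise_sd:.1f}  correlation with primary: {r:.2f}")
```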

Relevance: 20.00%

Abstract:

Chlorophyll determination with a portable chlorophyll meter can indicate the period of highest N demand of plants and whether sidedressing is required. Defining the optimal timing of N application to common bean is thus fundamental to increase N use efficiency, increase yields and reduce the cost of fertilization. The objectives of this study were to evaluate the efficiency of the N sufficiency index (NSI), calculated from the relative chlorophyll index (RCI) in leaves measured with a portable chlorophyll meter, as an indicator of the timing of N sidedressing, and to verify which NSI threshold (90 or 95%) is the most appropriate to indicate the moment of N fertilization of common bean cultivar Perola. The experiment was carried out in the rainy and dry growing seasons of the agricultural year 2009/10 on a dystroferric Red Nitosol in Botucatu, São Paulo State, Brazil, in a randomized complete block design with five treatments and four replications. The treatments consisted of the following N managements: M1, 200 kg ha-1 N (40 kg at sowing + 80 kg 15 days after emergence (DAE) + 80 kg 30 DAE); M2, 100 kg ha-1 N (20 kg at sowing + 40 kg 15 DAE + 40 kg 30 DAE); M3, 20 kg ha-1 N at sowing + 30 kg ha-1 whenever chlorophyll meter readings indicated NSI < 95%; M4, 20 kg ha-1 N at sowing + 30 kg ha-1 whenever readings indicated NSI < 90%; and M5, a control without N application. The RCI, aboveground dry matter, total leaf N concentration, production components, grain yield, relative yield, and N use efficiency were evaluated. The RCI correlated with leaf N concentration. By monitoring the RCI with the chlorophyll meter, the period of N sidedressing of common bean could be defined, improving N use efficiency and avoiding unnecessary N supply. The NSI threshold of 90% of the reference area was the more efficient criterion for defining the moment of N sidedressing and increasing N use efficiency.
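A minimal sketch of the decision rule evaluated in the study, using the usual definition of the N sufficiency index as the plot reading relative to a well-fertilized reference area (the readings below are hypothetical):

```python
# Sketch of the N sufficiency index (NSI) decision rule: NSI is the
# chlorophyll-meter reading of the plot relative to a well-fertilized
# reference area; sidedressed N (30 kg ha-1 in managements M3/M4) is
# applied whenever NSI drops below the chosen threshold.
# Readings here are hypothetical.

def nsi(plot_rci: float, reference_rci: float) -> float:
    """N sufficiency index, in percent."""
    return 100.0 * plot_rci / reference_rci

def needs_sidedressing(plot_rci: float, reference_rci: float,
                       threshold: float = 90.0) -> bool:
    return nsi(plot_rci, reference_rci) < threshold

# Example: plot reading of 52.1 vs 60.3 in the reference area.
print(needs_sidedressing(52.1, 60.3, threshold=90.0))  # True -> apply N
print(needs_sidedressing(58.9, 60.3, threshold=95.0))  # False -> wait
```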

Relevance: 20.00%

Abstract:

We study the action of a weighted Fourier–Laplace transform on the functions in the reproducing kernel Hilbert space (RKHS) associated with a positive definite kernel on the sphere. After defining a notion of smoothness implied by the transform, we show that smoothness of the kernel implies the same smoothness for the generating elements (spherical harmonics) in the Mercer expansion of the kernel. We prove a reproducing property for the weighted Fourier–Laplace transform of the functions in the RKHS and embed the RKHS into spaces of smooth functions. Some relevant properties of the embedding are considered, including compactness and boundedness. The approach taken in the paper includes two important notions of differentiability characterized by weighted Fourier–Laplace transforms: fractional derivatives and Laplace–Beltrami derivatives.
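The Mercer expansion referred to above takes, for a zonal positive definite kernel on the sphere, the standard spherical-harmonic form (notation assumed here, not quoted from the paper):

```latex
% Spherical-harmonic (Mercer) expansion of a zonal positive definite
% kernel on the sphere S^d (standard form; notation is assumed):
K(x, y) \;=\; \sum_{k \ge 0} a_k \sum_{m=1}^{N(k,d)} Y_{k,m}(x)\, Y_{k,m}(y),
\qquad a_k \ge 0,
% where \{Y_{k,m}\} is an orthonormal basis of spherical harmonics of
% degree k, N(k,d) is the dimension of that space, and smoothness of
% K is read off the decay of the coefficients a_k.
```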

Relevance: 20.00%

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications data is naturally represented in structured form, but since traditional machine learning methods deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function.

Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets whose node labels belong to a large domain. A second drawback of tree kernels is the time complexity required in both the learning and classification phases; such complexity can sometimes prevent the kernels from being applied in scenarios involving large amounts of data.

This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower dimensional representation, we are able to perform inexact matchings between different inputs in the original space.

The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure; the kernel function we propose is, instead, especially focused on this aspect.

The third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels, and we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, reducing the computational burden and storage requirements.
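To make the sparsity discussion concrete, a minimal subtree kernel that counts complete shared subtrees via canonical string encodings (the subset tree kernel of the text additionally matches partial productions and needs the usual dynamic program over node pairs; tree encoding and names here are illustrative):

```python
from collections import Counter

# Minimal subtree kernel: counts pairs of identical *complete*
# subtrees shared by two trees, via canonical string encodings.
# With node labels drawn from a large domain, most counts are zero
# for most tree pairs -- the sparsity problem discussed above.

def encode_subtrees(label, children, out):
    """Canonically encode the tree, recording every complete subtree."""
    enc = "(" + label + "".join(encode_subtrees(l, c, out)
                                for l, c in children) + ")"
    out[enc] += 1
    return enc

def subtree_kernel(tree_a, tree_b):
    counts_a, counts_b = Counter(), Counter()
    encode_subtrees(*tree_a, counts_a)
    encode_subtrees(*tree_b, counts_b)
    # Dot product of subtree-count vectors = number of matching pairs.
    return sum(counts_a[s] * counts_b[s] for s in counts_a)

# Trees as (label, [children]) pairs.
t1 = ("S", [("NP", [("D", []), ("N", [])]), ("VP", [("V", [])])])
t2 = ("S", [("NP", [("D", []), ("N", [])]), ("VP", [("V", []), ("NP", [])])])
print(subtree_kernel(t1, t2))  # shared subtrees: D, N, V, NP(D,N) -> 4
```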