618 results for WLT Estimators


Relevance:

10.00%

Publisher:

Abstract:

The distribution of the number of heterozygous loci in two randomly chosen gametes or in a random diploid zygote provides information regarding the nonrandom association of alleles among different genetic loci. Two alternative statistics may be employed for detection of nonrandom association of genes of different loci when observations are made on these distributions: the observed variance of the number of heterozygous loci (s²k) and a goodness-of-fit criterion (X²) contrasting the observed distribution with that expected under the hypothesis of random association of genes. It is shown, by simulation, that s²k is statistically more efficient than X² at detecting a given extent of nonrandom association. Asymptotic normality of s²k is justified, and X² is shown to follow a chi-square (χ²) distribution with a partial loss of degrees of freedom arising from the estimation of parameters from the marginal gene frequency data. Whenever direct evaluation of linkage disequilibrium values is possible, tests based on maximum likelihood estimators of linkage disequilibria require a smaller sample size (number of zygotes or gametes) to detect a given level of nonrandom association than tests based on s²k. Summarization of multilocus genotype (or haplotype) data into classes defined by the number of heterozygous loci thus entails an appreciable loss of information.
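The null behaviour described above can be sketched in a few lines of Python (locus count, allele frequencies, and sample size are all hypothetical): under random association, the observed variance s²k of the number of heterozygous loci should match the binomial-sum variance implied by independence across loci, while nonrandom association would inflate it above that benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 10, 20000
p = rng.uniform(0.2, 0.8, size=L)   # hypothetical allele frequencies, one per locus
# Under random association, loci are independent and locus j is
# heterozygous with probability h_j = 2 p_j (1 - p_j).
h = 2 * p * (1 - p)
het = rng.random((n, L)) < h        # heterozygosity indicators per zygote
k = het.sum(axis=1)                 # number of heterozygous loci per zygote
s2k = k.var(ddof=1)                 # observed variance s^2_k
expected = (h * (1 - h)).sum()      # binomial-sum variance under independence
```

With independent loci the two quantities agree up to sampling noise; an excess of `s2k` over `expected` is the signal the abstract's test looks for.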

Relevance:

10.00%

Publisher:

Abstract:

Stata is a general-purpose software package that has become popular in disciplines such as epidemiology, economics, and the social sciences. Users like Stata for its scientific approach, its robustness and reliability, and the ease with which its functionality can be extended by user-written programs. In this talk I will first give a brief overview of the functionality of Stata and then discuss two specific features: survey estimation and predictive margins/marginal effects. Most surveys are based on complex samples that contain multiple sampling stages, are stratified or clustered, and feature unequal selection probabilities. Standard estimators can produce misleading results in such samples unless the peculiarities of the sampling plan are taken into account. Stata offers survey statistics for complex samples for a wide variety of estimators and supports several variance estimation procedures such as linearization, jackknife, and balanced repeated replication (see Kreuter and Valliant, 2007, Stata Journal 7: 1-21). In the talk I will illustrate these features using applied examples and show how user-written commands can be adapted to support complex samples. The models we fit to our data can also be complex, making them difficult to interpret, especially in the case of nonlinear or non-additive models (Mood, 2010, European Sociological Review 26: 67-82). Stata provides a number of highly useful commands that make the results of such models accessible by computing and displaying predictive margins and marginal effects. In my talk I will discuss these commands and provide various examples demonstrating their use.

Relevance:

10.00%

Publisher:

Abstract:

A graphing method was developed and tested to estimate gestational ages pre- and postnatally in a consistent manner, for epidemiological research and clinical purposes, in fetuses/infants of women with few consistent prenatal estimators of gestational age. Each patient's available data were plotted on a single-page graph to give a comprehensive overview of that patient. A hierarchical classification of gestational age determination was then applied in a systematic manner, and reasonable gestational age estimates were produced. The method was tested for validity and reliability on 50 women who had known dates for their last menstrual period or dates of conception, as well as multiple ultrasound examinations and other gestational age estimating measures. The feasibility of the procedure was then tested on 1223 low-income women with few gestational age estimators. The graphing method proved to have high inter- and intrarater reliability. It was quick, easy to use, inexpensive, and did not require special equipment. The graphing method estimate of gestational age for each infant was tested against the last-menstrual-period gestational age estimate using paired t-tests, F tests, and the Kolmogorov-Smirnov test of similar populations, producing a 98 percent probability or better that the means and data populations were the same. Less than 5 percent of the infants' gestational ages were misclassified using the graphing method, much lower than the misclassification produced by ultrasound or neonatal examination estimates.
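The validation tests named above (paired t-test, Kolmogorov-Smirnov) can be sketched with SciPy; the paired gestational-age estimates below are invented purely to show the calls, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
# Invented paired gestational-age estimates (weeks) for 50 infants
lmp = rng.normal(39.0, 1.5, size=50)           # last-menstrual-period estimate
graph = lmp + rng.normal(0.0, 0.3, size=50)    # graphing-method estimate
t_res = stats.ttest_rel(graph, lmp)            # paired t-test on the means
ks_res = stats.ks_2samp(graph, lmp)            # K-S test on the two distributions
```

A large p-value in both tests is the pattern the study reports: no detectable difference between the graphing-method and last-menstrual-period estimates.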

Relevance:

10.00%

Publisher:

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993). Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the effect of the design and of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution.
The results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias correction techniques such as bootstrapping should be considered as an alternative.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.
Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989-996.
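A simulation of the 3-level Poisson structure studied here (random intercepts at levels 2 and 3 on the log scale) might look like the following sketch; all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
J, K, n = 30, 5, 10     # level-3 groups, level-2 clusters per group, level-1 units
s3, s2 = 0.05, 0.05     # hypothetical level-3 and level-2 random-intercept variances
b0 = 0.5                # fixed intercept on the log scale

u3 = rng.normal(0.0, np.sqrt(s3), size=J)        # level-3 random effects
u2 = rng.normal(0.0, np.sqrt(s2), size=(J, K))   # level-2 random effects
eta = b0 + u3[:, None, None] + u2[:, :, None]    # linear predictor per cluster
lam = np.broadcast_to(np.exp(eta), (J, K, n))    # Poisson rates, replicated over units
y = rng.poisson(lam)                             # level-1 Poisson outcomes
```

Varying `J`, `K`, and `n` while holding the total sample size fixed is exactly the kind of design trade-off the study investigates.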

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present local stereological estimators of Minkowski tensors defined on convex bodies in ℝ^d. Special cases cover a number of well-known local stereological estimators of volume and surface area in ℝ³, but the general set-up also provides new local stereological estimators of various types of centres of gravity and tensors of rank two. Rank-two tensors can be represented as ellipsoids and contain information about shape and orientation. The performance of some of the estimators of centres of gravity and volume tensors of rank two is investigated by simulation.
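As a rough illustration (not the paper's local stereological estimators, which work from sections), the rank-two volume tensor of a convex body can be approximated by Monte Carlo as the second moment matrix of uniform points in the body; its eigen-decomposition recovers the shape and orientation of a hypothetical ellipsoid:

```python
import numpy as np

rng = np.random.default_rng(3)
axes = np.array([3.0, 2.0, 1.0])   # semi-axes of a hypothetical ellipsoid
n = 200000
# Rejection-sample uniform points in the ellipsoid from its bounding box
pts = rng.uniform(-axes, axes, size=(n, 3))
x = pts[((pts / axes) ** 2).sum(axis=1) <= 1.0]
# Rank-two volume tensor (up to normalisation): second moment matrix of the body
T = x.T @ x / len(x)
# For a uniform ellipsoid the eigenvalues are a_i^2 / 5, so their square
# roots are proportional to the semi-axes
shape = np.sqrt(np.sort(np.linalg.eigvalsh(T))[::-1])
ratio = shape / shape[0]
```

The recovered axis ratios approach 1 : 2/3 : 1/3, matching the input semi-axes, which is the sense in which a rank-two tensor encodes shape and orientation.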

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the econometrics literature has shown a growing interest in the study of partially identified models, in which the object of economic and statistical interest is a set rather than a point. The characterization of this set and the development of consistent estimators and inference procedures for it with desirable properties are the main goals of partial identification analysis. This review introduces the fundamental tools of the theory of random sets, which brings together elements of topology, convex geometry, and probability theory to develop a coherent mathematical framework to analyze random elements whose realizations are sets. It then elucidates how these tools have been fruitfully applied in econometrics to reach the goals of partial identification analysis.
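A textbook example of a partially identified parameter can be sketched with invented interval-censored data: when only an interval [y_lo, y_hi] containing each outcome is observed, the identified set for the mean is itself an interval, estimated by the means of the endpoints:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Invented outcome, observed only up to an interval [y_lo, y_hi]
y = rng.normal(2.0, 1.0, size=n)
half_width = rng.uniform(0.0, 1.0, size=n)
y_lo, y_hi = y - half_width, y + half_width
# Sharp bounds on E[y]: any distribution consistent with the observed
# intervals places the mean between these two endpoint means
lower, upper = y_lo.mean(), y_hi.mean()
```

The identified set `[lower, upper]` is the set-valued object of interest; the random set theory surveyed in the abstract provides the machinery for inference on such sets.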

Relevance:

10.00%

Publisher:

Abstract:

Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
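The abstract does not spell out the algorithm, so the following is only a loose sketch of the general idea of annealed, robust neighbour averaging on a 1-D signal; the weights and annealing schedule are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, 400))
noisy = clean + 0.3 * rng.normal(size=400)

u = noisy.copy()
# Invented annealing schedule: the robustness scale sigma shrinks, so that
# late in the process large jumps (edges) are no longer averaged across
for sigma in np.linspace(1.0, 0.1, 30):
    left, right = np.roll(u, 1), np.roll(u, -1)
    for v in (left, right):
        w = np.exp(-((v - u) ** 2) / (2.0 * sigma ** 2))  # robust edge-stopping weight
        u = u + 0.25 * w * (v - u)

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((u - clean) ** 2)
```

The Gaussian weight acts as a robust estimator's influence function: early (large sigma) the process behaves like plain diffusion, while the shrinking scale progressively protects genuine structure from being smoothed away.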

Relevance:

10.00%

Publisher:

Abstract:

This survey provides a self-contained account of M-estimation of multivariate scatter. In particular, we present new proofs for the existence of the underlying M-functionals and discuss their weak continuity and differentiability. This is done in a rather general framework with matrix-valued random variables. By doing so we reveal a connection between Tyler's (1987) M-functional of scatter and the estimation of proportional covariance matrices. Moreover, this general framework allows us to treat a new class of scatter estimators, based on symmetrizations of arbitrary order. Finally, these results are applied to M-estimation of multivariate location and scatter via multivariate t-distributions.
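The final application can be made concrete: M-estimation of multivariate location and scatter via a multivariate t-distribution reduces to an iteratively reweighted fixed-point scheme (a standard EM-type iteration, shown here on simulated data with hypothetical parameters):

```python
import numpy as np

rng = np.random.default_rng(6)
d, n, nu = 2, 2000, 4.0
true_mu = np.array([1.0, -1.0])
true_scatter = np.array([[2.0, 0.5], [0.5, 1.0]])
# Multivariate t sample via the Gaussian / chi-square mixture representation
g = rng.multivariate_normal(np.zeros(d), true_scatter, size=n)
s = rng.chisquare(nu, size=n) / nu
x = true_mu + g / np.sqrt(s)[:, None]

# EM-type fixed point: downweight points by (nu + d) / (nu + Mahalanobis^2)
mu, S = x.mean(axis=0), np.cov(x.T)
for _ in range(50):
    r = x - mu
    d2 = np.einsum("ij,jk,ik->i", r, np.linalg.inv(S), r)  # squared Mahalanobis
    wt = (nu + d) / (nu + d2)
    mu = (wt[:, None] * x).sum(axis=0) / wt.sum()
    r = x - mu
    S = (wt[:, None] * r).T @ r / n
```

Outlying points receive small weights, which is what makes the t-based estimators robust relative to the sample mean and covariance.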

Relevance:

10.00%

Publisher:

Abstract:

Children typically hold very optimistic views of their own skills, but so far only a few studies have investigated possible correlates of the ability to predict performance accurately. Therefore, this study examined the role of individual differences in performance estimation accuracy as a global metacognitive index for different monitoring and control skills (item-level judgments of learning [JOLs] and confidence judgments [CJs]), metacognitive control processes (allocation of study time and control of answers), and executive functions (cognitive flexibility, inhibition, working memory) in 6-year-olds (N=93). The three groups of underestimators, realists, and overestimators differed significantly in their monitoring and control abilities: the underestimators outperformed the overestimators by showing a higher discrimination in CJs between correct and incorrect recognition. Also, the underestimators scored higher on the adequate control of incorrectly recognized items. Regarding the interplay of monitoring and control processes, underestimators spent more time studying items with low JOLs and relied more systematically on their monitoring when controlling their recognition compared to overestimators. At the same time, the three groups did not differ significantly from each other in their executive functions. Overall, the results indicate that differences in performance estimation accuracy are systematically related to other global and item-level metacognitive monitoring and control abilities in children as young as six years of age, while no meaningful association between performance estimation accuracy and executive functions was found.

Relevance:

10.00%

Publisher:

Abstract:

Background: In an artificial pancreas (AP), meals are either manually announced or detected and their size estimated from the blood glucose level. Both methods have limitations, which result in suboptimal postprandial glucose control. The GoCARB system is designed to provide the carbohydrate content of meals and is presented within the AP framework. Method: The combined use of GoCARB with a control algorithm is assessed in a series of 12 computer simulations. The simulations are defined according to the type of control (open or closed loop), the use or non-use of GoCARB, and the diabetics' skill in carbohydrate estimation. Results: For bad estimators without GoCARB, the percentage of time spent in the target range (70-180 mg/dl) during the postprandial period is 22.5% and 66.2% for open and closed loop, respectively. When GoCARB is used, the corresponding percentages are 99.7% and 99.8%. In the case of open loop, the time spent in severe hypoglycemia (<50 mg/dl) is 33.6% without GoCARB and is reduced to 0.0% when GoCARB is used. In the case of closed loop, the corresponding percentage is 1.4% without GoCARB and is reduced to 0.0% with GoCARB. Conclusion: The use of GoCARB improves the control of the postprandial response and glucose profiles, especially in the case of open loop. However, the most efficient regulation is achieved by the combined use of the control algorithm and GoCARB.
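The outcome metrics quoted above (time in target range, time in severe hypoglycemia) are simple fractions of a glucose trace; a sketch on an invented CGM-style signal:

```python
import numpy as np

rng = np.random.default_rng(7)
# Invented CGM-style glucose trace: 24 h sampled every 5 minutes (mg/dl)
t = np.arange(0, 24 * 60, 5)
glucose = 120 + 35 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, 10, t.size)

time_in_range = 100 * ((glucose >= 70) & (glucose <= 180)).mean()  # % in 70-180 mg/dl
time_severe_hypo = 100 * (glucose < 50).mean()                     # % below 50 mg/dl
```

These are the percentages reported in the Results section, computed over the postprandial window of each simulated trace.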

Relevance:

10.00%

Publisher:

Abstract:

The paper considers panel data methods for estimating ordered logit models with individual-specific correlated unobserved heterogeneity. We show that a popular approach is inconsistent, whereas some consistent and efficient estimators are available, including minimum distance and generalized method-of-moments estimators. A Monte Carlo study reveals the good properties of an alternative estimator that has not been considered in econometric applications before, is simple to implement, and is almost as efficient. An illustrative application based on data from the German Socio-Economic Panel confirms the large negative effect of unemployment on life satisfaction found in the previous literature.

Relevance:

10.00%

Publisher:

Abstract:

We develop statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles. We focus on the case where particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle shape and orientation, and we derive stereological estimators of the tensors. These estimators are combined to provide consistent estimators of the moments of the so-called particle cover density. The covariance structure associated with the particle cover density depends on the orientation and shape of the particles. For instance, if the distribution of the typical particle is invariant under rotations, then the covariance matrix is proportional to the identity matrix. We develop a non-parametric test for such isotropy. A flexible Lévy-based particle model is proposed, which may be analysed using a generalized method of moments in which the volume tensors enter. The developed methods are used to study the cell organization in the human brain cortex.
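The isotropy idea can be sketched directly: for hypothetical particle orientation vectors, the second moment matrix should be proportional to the identity under rotation invariance, so its eigenvalue spread is a natural anisotropy measure (this is an illustration, not the paper's non-parametric test):

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical particle orientations: unit vectors drawn uniformly on the sphere
v = rng.normal(size=(5000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
M = v.T @ v / len(v)            # second moment matrix of the orientations
# Under isotropy M is proportional to the identity (here I/3, since trace(M) = 1)
eig = np.linalg.eigvalsh(M)
anisotropy = eig.max() - eig.min()
```

A preferred orientation direction would show up as one eigenvalue pulling away from 1/3, with the corresponding eigenvector giving the dominant orientation.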

Relevance:

10.00%

Publisher:

Abstract:

Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch-sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided, which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and are complemented by numerical experiments.
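The flavour of such update formulae can be sketched in the simplest case (n = 0, q = 1: simple kriging with one new observation and a hypothetical squared-exponential covariance): an existing sample path is shifted by the kriging weights times its own prediction error at the new point, and the updated path then interpolates the new observation:

```python
import numpy as np

rng = np.random.default_rng(9)

def k(a, b):
    # Hypothetical squared-exponential covariance with range 0.3
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.3 ** 2)

xg = np.linspace(0.0, 1.0, 101)                      # simulation grid
C = k(xg, xg) + 1e-10 * np.eye(101)                  # jitter for stability
z0 = np.linalg.cholesky(C) @ rng.normal(size=101)    # unconditional sample path

xn, yn = np.array([0.5]), np.array([0.7])            # one new conditioning point
lam = np.linalg.solve(k(xn, xn), k(xn, xg)).ravel()  # simple-kriging weights
# Residual update: shift the old path by its own prediction error at xn
i = int(np.argmin(np.abs(xg - xn[0])))
z1 = z0 + lam * (yn[0] - z0[i])
```

No new Cholesky factorisation of the full grid covariance is needed: the update costs one small solve plus a rank-one correction, which is the kind of saving the paper's general n-to-(n+q) formulae deliver.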

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of nonparametric estimation of a concave regression function F. We show that the supremum distance between the least squares estimator and F on a compact interval is typically of order (log(n)/n)^(2/5). This entails rates of convergence for the estimator's derivative. Moreover, we discuss the impact of additional constraints on F such as monotonicity and pointwise bounds. Then we apply these results to the analysis of current status data, where the distribution function of the event times is assumed to be concave.
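A minimal sketch of the least squares estimator under a concavity constraint on a grid (second differences non-positive), using a generic SLSQP solver rather than a specialised shape-constrained algorithm, with invented data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
x = np.linspace(0.0, 1.0, 30)
y = np.sqrt(x) + 0.05 * rng.normal(size=30)   # concave truth plus noise (invented)

# Least squares fit subject to concavity: all second differences <= 0
obj = lambda f: np.sum((f - y) ** 2)
concave = {"type": "ineq", "fun": lambda f: -(f[2:] - 2 * f[1:-1] + f[:-2])}
fit = minimize(obj, y, constraints=[concave], method="SLSQP",
               options={"maxiter": 500})
fhat = fit.x
d2 = fhat[2:] - 2 * fhat[1:-1] + fhat[:-2]
```

Geometrically this is a projection of the data onto the convex cone of concave vectors; the abstract's rate result describes how fast such projections approach F uniformly.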

Relevance:

10.00%

Publisher:

Abstract:

We present new algorithms for M-estimators of multivariate scatter and location and for symmetrized M-estimators of multivariate scatter. The new algorithms are considerably faster than currently used fixed-point and related algorithms. The main idea is to utilize a second order Taylor expansion of the target functional and to devise a partial Newton-Raphson procedure. In connection with symmetrized M-estimators we work with incomplete U-statistics to accelerate our procedures initially.
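For context, the kind of fixed-point iteration these new algorithms accelerate can be sketched for Tyler's M-estimator of scatter (simulated elliptical data with a hypothetical shape matrix; the estimator is defined only up to scale):

```python
import numpy as np

rng = np.random.default_rng(11)
d, n = 3, 4000
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])      # hypothetical shape matrix
x = rng.multivariate_normal(np.zeros(d), A, size=n)

# Classical fixed-point iteration for Tyler's M-estimator of scatter:
# S <- (d/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i)
S = np.eye(d)
for _ in range(100):
    d2 = np.einsum("ij,jk,ik->i", x, np.linalg.inv(S), x)  # squared Mahalanobis norms
    S = d * (x / d2[:, None]).T @ x / n
    S /= np.trace(S)     # fix the scale; only the shape is identified
```

Each sweep costs a matrix inversion and a weighted cross-product; the linear convergence of this scheme is precisely what a partial Newton-Raphson step, as proposed in the abstract, is designed to improve on.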