951 results for ratio estimators


Relevance:

70.00%

Publisher:

Abstract:

Proportion estimators are quite frequently used in many application areas. The conventional proportion estimator (number of events divided by sample size) encounters a number of problems when the data are sparse, as will be demonstrated in various settings. The problem of estimating its variance when sample sizes become small is rarely addressed in a satisfying framework. Specifically, we have in mind applications like the weighted risk difference in multicenter trials or stratified risk ratio estimators (to adjust for potential confounders) in epidemiological studies. It is suggested to estimate $p$ using the parametric family $\hat{p}_c$ and $p(1 - p)$ using $\hat{p}_c(1 - \hat{p}_c)$, where $\hat{p}_c = (X + c)/(n + 2c)$. We investigate the estimation problem of choosing $c \ge 0$ from various perspectives, including minimizing the average mean squared error of $\hat{p}_c$ as well as the average bias and average mean squared error of $\hat{p}_c(1 - \hat{p}_c)$. The optimal value of $c$ for minimizing the average mean squared error of $\hat{p}_c$ is found to be independent of $n$ and equals $c = 1$. The optimal value of $c$ for minimizing the average mean squared error of $\hat{p}_c(1 - \hat{p}_c)$ is found to depend on $n$, with limiting value $c = 0.833$. This might justify using the near-optimal value $c = 1$ in practice, which also turns out to be beneficial when constructing confidence intervals of the form $\hat{p}_c \pm z_{1-\alpha/2} \sqrt{\hat{p}_c(1 - \hat{p}_c)/n}$.
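
The estimator family is simple to compute. Below is a minimal sketch in Python, assuming the family reconstructed above, $\hat{p}_c = (X + c)/(n + 2c)$, and a Wald-type interval built from it; the function names are illustrative, not from the paper:

```python
import math

def p_hat(x, n, c=1.0):
    """Shrinkage proportion estimator (x + c) / (n + 2c); c = 1 gives
    the Laplace estimator, the near-optimal choice reported above."""
    return (x + c) / (n + 2 * c)

def wald_ci(x, n, c=1.0, z=1.96):
    """Interval of the form p_c +/- z * sqrt(p_c * (1 - p_c) / n)."""
    p = p_hat(x, n, c)
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Sparse data: 1 event in 10 trials; the conventional estimate is 0.10
print(p_hat(1, 10))    # 0.1667
print(wald_ci(1, 10))  # non-degenerate even with very few events
```

Note how the shrinkage toward 1/2 keeps the estimated variance $\hat{p}_c(1 - \hat{p}_c)$ away from zero when few (or no) events are observed, which is exactly the sparse-data problem the abstract raises.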

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates the presence of limit oscillations in an adaptive sampling system. The basic sampling criterion operates in the sense that each next sampling instant occurs when the absolute difference between the signal amplitude and its most recently sampled value equals a prescribed threshold amplitude. The sampling criterion is then extended to involve a prescribed set of amplitudes. The limit oscillations can be interpreted through the equivalence of the adaptive sampling-and-hold device with a nonlinear device consisting of a relay with multiple hysteresis, whose parameterization is, in general, dependent on the initial conditions of the dynamic system. The study is performed in the time domain.
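
The basic criterion ("sample again once the signal has moved a fixed amount from the last sample") can be sketched as follows. This is a hypothetical illustration of the sampling law only, not of the paper's dynamic system or its oscillation analysis; the threshold name delta and the fine-grid approximation are assumptions:

```python
import math

def adaptive_sample(signal, t_end, dt=1e-3, delta=0.1):
    """Event-driven sampling: a new sample is taken whenever
    |x(t) - x(t_k)| reaches the threshold delta.  Returns the
    sampling instants; the fine grid dt only approximates the
    continuous-time crossing condition."""
    t, last = 0.0, signal(0.0)
    instants = [0.0]
    while t < t_end:
        t += dt
        if abs(signal(t) - last) >= delta:
            last = signal(t)
            instants.append(t)
    return instants

# Example: the sampling rate adapts to the local slope of a sine wave,
# clustering samples where the signal changes fastest.
times = adaptive_sample(lambda t: math.sin(2 * math.pi * t), t_end=1.0)
print(len(times), [round(t, 3) for t in times[:5]])
```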

Relevance:

60.00%

Publisher:

Abstract:

Bycatch, or the incidental catch of nontarget organisms during fishing operations, is a major issue in U.S. shrimp trawl fisheries. Because bycatch is typically discarded at sea, total bycatch is usually estimated by extrapolating from an observed bycatch sample to the entire fleet with either mean-per-unit or ratio estimators. Using both field observations of commercial shrimp trawlers and computer simulations, I compared five methods for generating bycatch estimates that were used in past studies, a mean-per-unit estimator and four forms of the ratio estimator, respectively: 1) the mean fish catch per unit of effort, where unit effort was a proxy for sample size; 2) the mean of the individual fish-to-shrimp ratios; 3) the ratio of mean fish catch to mean shrimp catch; 4) the mean of the ratios of fish catch per time fished (a variable measure of effort); and 5) the ratio of mean fish catch to mean time fished. For the field data, the different methods used to estimate bycatch of Atlantic croaker, spot, and weakfish yielded extremely different results, with no discernible pattern in the estimates by method, geographic region, or species. Simulated fishing fleets were used to compare bycatch estimated by the five methods with "actual" (simulated) bycatch. Simulations were conducted by using both normal and delta lognormal distributions of fish and shrimp and employed a range of values for several parameters, including mean catches of fish and shrimp, variability in the catches of fish and shrimp, variability in fishing effort, number of observations, and correlations between fish and shrimp catches. Results indicated that only the mean-per-unit estimator provided statistically unbiased estimates, whereas all other methods overestimated bycatch. The mean of the individual fish-to-shrimp ratios, the method used in the South Atlantic Bight before the 1990s, gave the most biased estimates. Because of the statistically significant two- and three-way interactions among parameters, it is unlikely that estimates generated by one method can be converted or corrected to estimates made by another method; therefore bycatch estimates obtained with different methods should not be compared directly.
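
For concreteness, the five per-tow statistics can be written down directly from paired observations. A sketch with hypothetical variable names; it omits the final extrapolation step (scaling by total fleet shrimp landings or effort), which the abstract does not specify:

```python
import statistics as st

def bycatch_estimators(fish, shrimp, effort):
    """The five methods compared in the paper, computed from per-tow
    observations of fish catch, shrimp catch, and time fished."""
    return {
        # 1) mean fish catch per unit of effort (unit effort ~ sample size)
        "mean_per_unit": st.mean(fish),
        # 2) mean of the individual fish-to-shrimp ratios
        "mean_of_ratios": st.mean(f / s for f, s in zip(fish, shrimp)),
        # 3) ratio of mean fish catch to mean shrimp catch
        "ratio_of_means": st.mean(fish) / st.mean(shrimp),
        # 4) mean of the ratios of fish catch per time fished
        "mean_of_cpue": st.mean(f / e for f, e in zip(fish, effort)),
        # 5) ratio of mean fish catch to mean time fished
        "ratio_of_mean_cpue": st.mean(fish) / st.mean(effort),
    }

# Example: three observed tows
print(bycatch_estimators(fish=[120.0, 80.0, 150.0],
                         shrimp=[30.0, 25.0, 40.0],
                         effort=[2.0, 1.5, 2.5]))
```

Method 2 averages ratios rather than taking a ratio of averages, which is why it is the most sensitive to tows with small shrimp catches and, per the results above, the most biased.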

Relevance:

60.00%

Publisher:

Abstract:

We studied, for the first time, the near-infrared, stellar and baryonic Tully-Fisher relations for a sample of field galaxies taken from a homogeneous Fabry-Perot sample of galaxies [the Gassendi HAlpha survey of SPirals (GHASP)]. The main advantage of GHASP over other samples is that the maximum rotational velocities were estimated from 2D velocity fields, avoiding assumptions about the inclination and position angle of the galaxies. By combining these data with 2MASS photometry, optical colours, HI masses and different mass-to-light ratio estimators, we found slopes of 4.48 +/- 0.38 and 3.64 +/- 0.28 for the stellar and baryonic Tully-Fisher relations, respectively. We found that these values do not change significantly when different mass-to-light ratio recipes are used. We also point out, for the first time, that rising and asymmetric rotation curves show a larger dispersion in the Tully-Fisher relation than flat or symmetric ones. Using the baryonic mass and the optical radius of the galaxies, we found that the surface baryonic mass density is almost constant for all the galaxies of this sample. In this study we also emphasize the presence of a break in the NIR Tully-Fisher relation at M(H,K) ~ -20, and we confirm that late-type galaxies present higher total-to-baryonic mass ratios than early-type spirals, suggesting that supernova feedback is indeed an important issue in late-type spirals. Owing to the well-defined sample selection criteria and the homogeneity of the data analysis, the Tully-Fisher relation for GHASP galaxies can be used as a reference for the study of this relation in other environments and at higher redshifts.
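
The quoted slopes are exponents in a power-law relation between mass and maximum rotational velocity. A generic sketch of how such a slope is obtained from a simple least-squares fit in log-log space; the paper's actual fitting method may differ (e.g. it may account for errors in both coordinates):

```python
import numpy as np

def tully_fisher_slope(v_max, mass):
    """Least-squares slope of log10(mass) against log10(v_max),
    i.e. the exponent a in M ~ V^a (4.48 stellar, 3.64 baryonic
    in the abstract)."""
    slope, _intercept = np.polyfit(np.log10(v_max), np.log10(mass), 1)
    return slope

# Synthetic example: masses built as M = const * V^4 recover slope ~4
v = np.array([100.0, 150.0, 200.0, 250.0])   # km/s
m = 1e4 * v ** 4.0                           # arbitrary mass units
print(tully_fisher_slope(v, m))              # ~4.0
```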

Relevance:

60.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

The characterization of thermocouple sensors for temperature measurement in varying-flow environments is a challenging problem. Recently, the authors introduced novel difference-equation-based algorithms that allow in situ characterization of temperature measurement probes consisting of two thermocouple sensors with differing time constants. In particular, a linear least squares (LS) lambda formulation of the characterization problem, which yields unbiased estimates when identified using generalized total LS, was introduced. These algorithms assume that the time constants do not change during operation and are therefore appropriate for temperature measurement in homogeneous constant-velocity liquid or gas flows. This paper develops an alternative β-formulation of the characterization problem that has the major advantage of allowing exploitation of a priori knowledge of the ratio of the sensor time constants, thereby facilitating the implementation of computationally efficient algorithms that are less sensitive to measurement noise. A number of variants of the β-formulation are developed, and appropriate unbiased estimators are identified. Monte Carlo simulation results are used to support the analysis.
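
The measurement setup can be pictured as two first-order lags driven by the same gas temperature. A hypothetical sketch of that setup only, not of the paper's difference-equation algorithms (which the abstract does not give); the parameter alpha stands for the a priori known time-constant ratio that the β-formulation exploits:

```python
import numpy as np

def two_sensor_response(gas_temp, dt, tau1, alpha):
    """First-order responses of two thermocouples with time constants
    tau1 and tau2 = alpha * tau1, both driven by the same gas
    temperature sequence sampled at interval dt."""
    tau2 = alpha * tau1
    y1 = np.zeros_like(gas_temp)
    y2 = np.zeros_like(gas_temp)
    y1[0] = y2[0] = gas_temp[0]
    for k in range(1, len(gas_temp)):
        # Euler discretization of tau * dy/dt = T_gas - y
        y1[k] = y1[k - 1] + (dt / tau1) * (gas_temp[k - 1] - y1[k - 1])
        y2[k] = y2[k - 1] + (dt / tau2) * (gas_temp[k - 1] - y2[k - 1])
    return y1, y2

# Example: the slower sensor (larger time constant) attenuates and
# delays the gas-temperature fluctuations more than the faster one.
t = np.arange(0.0, 2.0, 1e-3)
gas = 300.0 + 20.0 * np.sin(2.0 * np.pi * t)
y_fast, y_slow = two_sensor_response(gas, dt=1e-3, tau1=0.05, alpha=2.0)
```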

Relevance:

30.00%

Publisher:

Abstract:

Two simple and frequently used capture–recapture estimators of the population size are compared: Chao's lower-bound estimator and Zelterman's estimator allowing for contaminated distributions. In the Poisson case it is shown that if there are only counts of ones and twos, Zelterman's estimator is always bounded above by Chao's estimator. If counts larger than two exist, Zelterman's estimator becomes larger than Chao's provided that the ratio of the frequencies of counts of twos and ones is small enough. A similar analysis is provided for the binomial case. For a two-component mixture of Poisson distributions the asymptotic bias of both estimators is derived, and it is shown that the Zelterman estimator can experience large overestimation bias. A modified Zelterman estimator is suggested, and the bias-corrected version of Chao's estimator is also considered. All four estimators are compared in a simulation study.
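
Both estimators depend on the data only through the number n of distinct units observed and the frequencies f1 and f2 of units seen exactly once and twice. A minimal sketch; the closed forms are the standard published ones, while the function and variable names are mine:

```python
import math

def chao_lower_bound(f1, f2, n):
    """Chao's lower-bound estimator N = n + f1**2 / (2 * f2), where f1
    and f2 are the frequencies of units observed exactly once and twice
    and n is the number of distinct units observed."""
    return n + f1 ** 2 / (2 * f2)

def zelterman(f1, f2, n):
    """Zelterman's estimator N = n / (1 - exp(-2 * f2 / f1)).  Only f1
    and f2 enter the Poisson-parameter estimate 2 * f2 / f1, which is
    what makes the estimator robust to contaminated distributions."""
    return n / (1.0 - math.exp(-2.0 * f2 / f1))

# Example: 50 singletons and 20 doubletons among n = 100 observed units
print(chao_lower_bound(50, 20, 100))  # 162.5
print(zelterman(50, 20, 100))         # ~181.6
```

In this example the frequency ratio f2/f1 = 0.4 is small enough that Zelterman's estimate exceeds Chao's, illustrating the ordering discussed in the abstract.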

Relevance:

30.00%

Publisher:

Abstract:

This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques like the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures such as the observed-to-hidden ratio or the completeness-of-identification proportion when approaching the question of sample size choice.
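
The conditioning argument rests on the law of total variance. A sketch of the decomposition, conditioning on the number n of observed units (the notation is assumed here, not quoted from the note):

```latex
\operatorname{Var}(\hat{N})
  = \operatorname{E}\!\left[\operatorname{Var}(\hat{N} \mid n)\right]
  + \operatorname{Var}\!\left(\operatorname{E}[\hat{N} \mid n]\right)
```

Read against the abstract, the first term collects the variance due to estimating the model parameters given the observed sample, while the second captures the binomial variability of n itself as a draw of units from the population of size N; keeping the two apart is what clarifies how a bootstrap should resample.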

Relevance:

30.00%

Publisher:

Abstract:

Statistical graphics are a fundamental, yet often overlooked, set of components in the repertoire of data analytic tools. Graphs are quick, efficient and simple instruments for the preliminary exploration of a dataset, used to understand its structure and to provide insight into influential aspects of inference such as departures from assumptions and latent patterns. In this paper, we present and assess a graphical device for choosing a method for estimating population size in capture–recapture studies of closed populations. The basic concept is derived from the homogeneous Poisson distribution, for which the ratios of neighboring Poisson probabilities, multiplied by the value of the larger neighboring count, are constant (equal to the Poisson parameter). This property extends to the zero-truncated Poisson distribution, which is of fundamental importance in capture–recapture studies. In practice, however, this distributional property is often violated. The graphical device developed here, the ratio plot, can be used for assessing specific departures from a Poisson distribution. For example, simple contaminations of an otherwise homogeneous Poisson model can be easily detected, and a robust estimator for the population size can be suggested. Several robust estimators are developed, and a simulation study is provided to give some guidance on which should be used in practice. More systematic departures can also be easily detected using the ratio plot. In this paper, the focus is on Gamma mixtures of the Poisson distribution, which lead to a linear pattern (called structured heterogeneity) in the ratio plot. More generally, the paper shows that the ratio plot is monotone for arbitrary mixtures of power series densities.
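
The ratio plot ordinates follow directly from the Poisson property named above: for a Poisson distribution, (x + 1) p(x + 1) / p(x) equals the Poisson parameter for every x. A minimal sketch computing these ordinates from observed frequencies (the function name and example data are mine):

```python
def ratio_plot_points(freqs):
    """Ratio plot ordinates r(x) = (x + 1) * f[x + 1] / f[x], computed
    from observed frequencies f[x] of count x (x = 1, 2, ... for
    zero-truncated data).  Under a homogeneous Poisson model the points
    scatter around a horizontal line; a linearly increasing pattern
    indicates a Gamma mixture (structured heterogeneity)."""
    return {x: (x + 1) * freqs[x + 1] / freqs[x]
            for x in sorted(freqs) if x + 1 in freqs and freqs[x] > 0}

# Example: frequencies of capture counts 1..4 from a closed population
print(ratio_plot_points({1: 60, 2: 30, 3: 10, 4: 2}))
# {1: 1.0, 2: 1.0, 3: 0.8} -- roughly flat, consistent with Poisson
```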

Relevance:

30.00%

Publisher:

Abstract:

Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in the longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610], which requires identically distributed random variables, an assumption that is not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples.
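
For the single-random-effect case, the mixture in question is the familiar 50:50 result that arises because the null places the variance on the boundary of the parameter space; a sketch in the usual notation (assumed here, not quoted from the paper):

```latex
-2\left[\ell(\hat{\theta}_{H_0}) - \ell(\hat{\theta})\right]
  \xrightarrow{\;d\;}
  \tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1
```

Here χ²₀ denotes a point mass at zero, so the 5% critical value is the 90th percentile of χ²₁ (about 2.71) rather than 3.84; using the naive χ²₁ reference makes the test conservative. Testing a second variance component given one already in the model leads analogously to a ½χ²₁ + ½χ²₂ mixture.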

Relevance:

30.00%

Publisher:

Abstract:

Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions: the presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) in the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for (a) the overall test statistic/fit indices and (b) the change in test statistic/fit indices. Data were generated according to a multiple-group, single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement but did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations; therefore, it is essential that researchers consider the context and purpose of their analyses.
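
The "change in test statistic" examined here is the nested-model chi-square difference; in the usual notation (assumed, not quoted from the study):

```latex
\Delta\chi^2 = \chi^2_{\text{constrained}} - \chi^2_{\text{configural}},
\qquad
\Delta df = df_{\text{constrained}} - df_{\text{configural}}
```

Under the invariance hypothesis, Δχ² is referred to a chi-square distribution on Δdf degrees of freedom for normal-theory ML; scaled statistics such as the Yuan-Bentler version cannot simply be differenced and require a scaling correction first.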