989 results for Statistical variance


Relevance: 30.00%

Abstract:

Recent analysis of the Arctic Oscillation (AO) in the stratosphere and troposphere has suggested that predictability of the state of the tropospheric AO may be obtained from the state of the stratospheric AO. However, much of this research has been of a purely qualitative nature. We present a more thorough statistical analysis of a long AO amplitude dataset which seeks to establish the magnitude of such a link. A relationship between the AO in the lower stratosphere and on the 1000 hPa surface on a 10-45 day time-scale is revealed. The relationship accounts for 5% of the variance of the 1000 hPa time series at its peak value and is significant at the 5% level. Over a similar time-scale the 1000 hPa time series accounts for only 1% of its own variance, a fraction that is not significant at the 5% level. Further investigation reveals that the relationship is present only during the winter season, in particular during February and March. It is also demonstrated that using stratospheric AO amplitude data as a predictor in a simple statistical model yields a 5% gain in skill over a troposphere-only statistical model. This gain in skill is not reproduced if an unrelated time series is included as a predictor in the model.
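
The following minimal sketch (synthetic data, not the authors' code or dataset) illustrates the two quantities the abstract reports: the fraction of 1000 hPa variance explained by a lagged stratospheric index, and the out-of-sample skill gain from adding that index to a troposphere-only regression. The 20-day lag, the coupling strength and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
ao_strat = rng.standard_normal(n)              # synthetic stratospheric AO amplitude
lag = 20                                       # days; inside the 10-45 day window
ao_1000 = np.empty(n)
ao_1000[:lag] = rng.standard_normal(lag)
ao_1000[lag:] = 0.22 * ao_strat[:-lag] + rng.standard_normal(n - lag)

# Fraction of 1000 hPa variance explained by the lagged stratospheric index.
r = np.corrcoef(ao_strat[:-lag], ao_1000[lag:])[0, 1]
print(f"variance explained: {100 * r**2:.1f}%")        # ~5% by construction

# Skill gain of a stratosphere+troposphere regression over a troposphere-only
# one, measured as out-of-sample mean squared error on the second half.
y = ao_1000[lag:]
half = len(y) // 2
X_tropo = ao_1000[lag - 1:-1, None]                    # persistence predictor
X_both = np.column_stack([ao_1000[lag - 1:-1], ao_strat[:n - lag]])
for name, X in [("troposphere-only", X_tropo), ("with stratosphere", X_both)]:
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd[:half], y[:half], rcond=None)
    print(f"{name}: MSE = {np.mean((y[half:] - Xd[half:] @ beta) ** 2):.3f}")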

Relevance: 30.00%

Abstract:

This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied successfully here, and that it also identifies the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows a new estimator to be suggested as a weighted sum of the two. The decomposition of the variance into the two sources also offers a new understanding of how resampling techniques such as the bootstrap can be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, relative measures such as the observed-to-hidden ratio or the completeness-of-identification proportion are suggested for approaching the question of sample size choice.
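
For concreteness, here is a hedged sketch of the two estimators named above, computed from the frequency counts f1 (units captured exactly once) and f2 (units captured exactly twice). The variance formula shown is the classical one for Chao's estimator (Chao, 1987), not the improved conditioning-based estimator this note proposes, and the example counts are invented.

import numpy as np

def chao_estimate(n, f1, f2):
    """Chao's lower-bound estimator of the population size N."""
    return n + f1**2 / (2 * f2)

def chao_variance(f1, f2):
    """Classical variance estimate for Chao's estimator (f2 > 0)."""
    g = f1 / f2
    return f2 * (g**4 / 4 + g**3 + g**2 / 2)

def zelterman_estimate(n, f1, f2):
    """Zelterman's estimator: n / (1 - exp(-lambda)) with lambda = 2*f2/f1."""
    lam = 2 * f2 / f1
    return n / (1 - np.exp(-lam))

# Example: 250 distinct units observed, 110 seen exactly once, 40 seen twice.
n, f1, f2 = 250, 110, 40
se = np.sqrt(chao_variance(f1, f2))
print(f"Chao: {chao_estimate(n, f1, f2):.0f} (SE {se:.0f})")
print(f"Zelterman: {zelterman_estimate(n, f1, f2):.0f}")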

Relevance: 30.00%

Abstract:

The jackknife method is often used for variance estimation in sample surveys but has only been developed for a limited class of sampling designs. We propose a jackknife variance estimator which is defined for any without-replacement unequal probability sampling design. We demonstrate design consistency of this estimator for a broad class of point estimators. A Monte Carlo study shows how the proposed estimator may improve on existing estimators.
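
As background, a generic delete-one jackknife for the Horvitz-Thompson total looks like the sketch below. The data are synthetic, and the rescaling of the remaining weights by n/(n-1) is the classical textbook form, not necessarily the design-consistent modification the paper proposes.

import numpy as np

def ht_total(y, pi):
    """Horvitz-Thompson estimator of the population total."""
    return np.sum(y / pi)

def jackknife_variance(y, pi):
    """Classical delete-one jackknife: drop unit j, rescale the remaining
    design weights by n/(n-1), and use the spread of the recomputed totals."""
    n = len(y)
    theta = np.array([
        n / (n - 1) * ht_total(np.delete(y, j), np.delete(pi, j))
        for j in range(n)
    ])
    return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)

rng = np.random.default_rng(1)
x = rng.gamma(2.0, 10.0, size=40)                # size measure
y = 3 * x + rng.normal(0, 5, size=40)            # study variable
pi = np.clip(8 * x / x.sum(), 0.02, 0.9)         # pps-style inclusion probabilities
print(f"HT total: {ht_total(y, pi):.0f}")
print(f"jackknife variance: {jackknife_variance(y, pi):.0f}")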

Relevance: 30.00%

Abstract:

It is common practice to design a survey with a large number of strata. However, in this case the usual techniques for variance estimation can be inaccurate. This paper proposes a variance estimator for estimators of totals. The proposed method can be implemented with standard statistical packages without any specific programming, as it involves only simple estimation techniques such as regression fitting.
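
The sketch below shows the textbook variance estimator for a stratified total, which makes the problem concrete: with many strata and very few units per stratum, the within-stratum variances are poorly estimated, and the estimator is undefined for single-unit strata. The paper's regression-based alternative is not reproduced here; the data are synthetic.

import numpy as np

def stratified_total_variance(samples, N_h):
    """Usual unbiased variance of the stratified expansion estimator of a total.
    samples: one 1-D array per stratum; N_h: stratum population sizes."""
    v = 0.0
    for y_h, Nh in zip(samples, N_h):
        nh = len(y_h)
        if nh < 2:
            raise ValueError("needs at least 2 sampled units per stratum")
        v += Nh**2 * (1 - nh / Nh) * y_h.var(ddof=1) / nh
    return v

rng = np.random.default_rng(2)
N_h = [50, 80, 120]
samples = [rng.normal(10 * (h + 1), 3, size=5) for h in range(3)]
print(f"estimated variance of the total: {stratified_total_variance(samples, N_h):.1f}")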

Relevance: 30.00%

Abstract:

The systematic sampling (SYS) design (Madow and Madow, 1944) is widely used by statistical offices due to its simplicity and efficiency (e.g., Iachan, 1982). But it suffers from a serious defect: it is impossible to estimate the sampling variance unbiasedly (Iachan, 1982), and the usual variance estimators (Yates and Grundy, 1953) are inadequate and can significantly overestimate the variance (Särndal et al., 1992). We propose a novel variance estimator which is less biased and can be implemented with any given population order. We justify this estimator theoretically and with a Monte Carlo simulation study.
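
For reference, a widely used less-biased alternative for systematic samples is the successive-difference estimator (e.g., Wolter, 1985), sketched below for the expansion estimator of a total. It is not the novel estimator proposed in the paper, and the population is synthetic.

import numpy as np

def sys_variance_successive_diff(y, N):
    """Successive-difference variance estimate for the expansion estimator of
    a total under systematic sampling, keeping units in selection order."""
    n = len(y)
    f = n / N
    return N**2 * (1 - f) * np.sum(np.diff(y) ** 2) / (2 * n * (n - 1))

rng = np.random.default_rng(3)
population = np.sort(rng.gamma(2.0, 5.0, size=1000))   # ordered population
sample = population[::50]                              # 1-in-50 systematic sample
print(f"variance estimate: {sys_variance_successive_diff(sample, 1000):.1f}")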

Relevance: 30.00%

Abstract:

We show that the Hájek (Ann. Math. Statist. (1964) 1491) variance estimator can be used to estimate the variance of the Horvitz–Thompson estimator when the Chao sampling scheme (Chao, Biometrika 69 (1982) 653) is implemented. This estimator is simple and can be implemented with any statistical package. We consider both a numerical and an analytic method to show that this estimator can be used, and a series of simulations supports our findings.
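
A minimal sketch of the Hájek variance estimator in the form commonly quoted for the Horvitz-Thompson total, using only first-order inclusion probabilities, which is what makes it easy to implement in any package. The data are synthetic, and the exact constants may differ from the paper's presentation.

import numpy as np

def hajek_variance(y, pi):
    """Hajek-type variance estimator for the Horvitz-Thompson total, using
    only the first-order inclusion probabilities pi_i."""
    n = len(y)
    c = n / (n - 1) * pi * (1 - pi)
    e = y / pi                            # expanded values y_i / pi_i
    B = np.sum(c * e) / np.sum(c)         # weighted mean of expanded values
    return np.sum(c * (e - B) ** 2)

rng = np.random.default_rng(4)
pi = rng.uniform(0.05, 0.5, size=30)      # inclusion probabilities of sampled units
y = rng.gamma(2.0, 10.0, size=30)         # observed study variable
print(f"Hajek variance estimate: {hajek_variance(y, pi):.1f}")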

Relevance: 30.00%

Abstract:

We show that an analysis of the mean and variance of the discrete wavelet coefficients of coaveraged time-domain interferograms can be used as a criterion for determining when to stop coaveraging. We also show that, if a prediction model built in the wavelet domain is used to determine the composition of unknown samples, a stopping criterion for the coaveraging process can be developed with respect to the uncertainty tolerated in the prediction.
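
The sketch below is a conceptual illustration (not the authors' procedure) of a wavelet-domain stopping rule for coaveraging: new scans are accumulated while the wavelet coefficients of the running coaverage still change by more than a tolerance. It uses the PyWavelets package; the toy interferogram, tolerance and wavelet choice are assumptions.

import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
truth = np.exp(-((t - 0.5) ** 2) / 0.001) * np.cos(300 * t)   # toy interferogram

tol, max_scans = 1e-3, 500
running = np.zeros_like(t)
prev = None
for k in range(1, max_scans + 1):
    scan = truth + 0.5 * rng.standard_normal(t.size)          # one noisy scan
    running += (scan - running) / k                           # update coaverage
    coeffs = np.concatenate(pywt.wavedec(running, "db4", level=4))
    if prev is not None:
        change = np.mean((coeffs - prev) ** 2)                # coefficient stability
        if change < tol:
            print(f"stop after {k} scans (mean-square change {change:.2e})")
            break
    prev = coeffs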

Relevance: 30.00%

Abstract:

We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (lagged correlations, linear inverse modelling and constructed analogues) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but the most successful statistical method depends on the region considered, the GCM data used and the prediction lead time. However, the constructed analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different from the regions identified as potentially predictable from variance-explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far North Atlantic, suggesting that the more northern latitudes are optimal locations for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
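
The constructed analogue method mentioned above can be sketched in a few lines: the current anomaly field is expressed as a least-squares combination of past fields, and the same weights are then applied to the fields observed a lead time later. The sketch below uses synthetic fields in place of the HadCM3/HadGEM1 control runs; operational versions typically regularize the weights (e.g., via truncated EOFs).

import numpy as np

rng = np.random.default_rng(6)
n_years, n_grid, lead = 200, 150, 5
library = rng.standard_normal((n_years, n_grid))       # past SST anomaly fields
library = np.cumsum(library, axis=0) * 0.1             # add low-frequency memory

state = library[-1]                                    # current anomaly field
past, future = library[:-lead - 1], library[lead:-1]   # aligned (t, t+lead) pairs
# weights w minimizing ||past.T @ w - state||^2 (minimum-norm solution)
w, *_ = np.linalg.lstsq(past.T, state, rcond=None)
forecast = future.T @ w                                # constructed-analogue forecast
print(f"forecast field shape: {forecast.shape}, weight norm: {np.linalg.norm(w):.2f}")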

Relevance: 30.00%

Abstract:

The impact of projected climate change on wine production was analysed for the Demarcated Region of Douro, Portugal. A statistical grapevine yield model (GYM) was developed using climate parameters as predictors. Statistically significant correlations were identified between annual yield and monthly mean temperatures and monthly precipitation totals during the growing cycle. These atmospheric factors control grapevine yield in the region, with the GYM explaining 50.4% of the total variance in the yield time series in recent decades. Anomalously high March rainfall (during budburst, shoot and inflorescence development) favours yield, as do anomalously high temperatures and low precipitation amounts in May and June (flowering and berry development, respectively). The GYM was applied to the output of a regional climate model, which was shown to reproduce the GYM predictors realistically. Finally, using ensemble simulations under the A1B emission scenario, projections of GYM-derived yield in the Douro Region for the whole of the twenty-first century were analysed. A slight upward trend in yield is projected until about 2050, followed by a steep and continuous increase until the end of the twenty-first century, when yield is projected to be about 800 kg/ha above current values. While this estimate is based on meteorological parameters alone, changes due to elevated CO2 may further enhance the effect. In spite of the associated uncertainties, it can be stated that projected climate change may significantly benefit wine yield in the Douro Valley.
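
The GYM described above is, in essence, a multiple linear regression of annual yield on monthly climate predictors. The sketch below fits such a model by ordinary least squares on synthetic data; the predictor set follows the abstract (March rain, May/June temperature and rain), while all values and coefficients are invented.

import numpy as np

rng = np.random.default_rng(7)
n_years = 30
X = np.column_stack([
    rng.normal(60, 25, n_years),    # March precipitation (mm)
    rng.normal(16, 1.5, n_years),   # May mean temperature (deg C)
    rng.normal(21, 1.5, n_years),   # June mean temperature (deg C)
    rng.normal(45, 20, n_years),    # May-June precipitation (mm)
])
beta_true = np.array([4.0, 90.0, 70.0, -6.0])          # invented sensitivities
yield_kg_ha = 4000 + (X - X.mean(0)) @ beta_true + rng.normal(0, 400, n_years)

Xd = np.column_stack([np.ones(n_years), X])
beta, *_ = np.linalg.lstsq(Xd, yield_kg_ha, rcond=None)
resid = yield_kg_ha - Xd @ beta
r2 = 1 - np.sum(resid**2) / np.sum((yield_kg_ha - yield_kg_ha.mean()) ** 2)
print(f"variance explained by the toy GYM: {100 * r2:.1f}%")   # cf. 50.4% in the study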

Relevance: 30.00%

Abstract:

A dynamical wind-wave climate simulation covering the North Atlantic Ocean and spanning the whole 21st century under the A1B scenario has been compared with a set of statistical projections using atmospheric variables or large-scale climate indices as predictors. As a first step, the performance of all statistical models has been evaluated for the present-day climate: they have been compared with a dynamical wind-wave hindcast in terms of winter Significant Wave Height (SWH) trends and variance, as well as with altimetry data. For the projections, it has been found that statistical models using wind speed as a predictor are able to capture a larger fraction of the winter SWH inter-annual variability (68% on average) and of the long-term changes projected by the dynamical simulation. Conversely, regression models using climate indices, sea level pressure and/or pressure gradient as predictors account for a smaller share of the SWH variance (from 2.8% to 33%) and do not reproduce the dynamically projected long-term trends over the North Atlantic. Investigating the wind-sea and swell components separately, we have found that the combination of two regression models, one for the wind-sea component and another for the swell component, can significantly improve the wave field projections obtained from single regression models over the North Atlantic.
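
The two-component idea in the last sentence can be sketched as follows: fit one regression for the wind-sea SWH on local wind speed and another for the swell SWH on a large-scale predictor, then recombine the components, whose heights add in energy (Hs = sqrt(Hs_sea^2 + Hs_swell^2)). All data below are synthetic placeholders for the hindcast fields, and the specific predictors are illustrative.

import numpy as np

rng = np.random.default_rng(8)
n = 400
wind = rng.weibull(2.0, n) * 8                        # local wind speed (m/s)
index = rng.standard_normal(n)                        # large-scale predictor
hs_sea = 0.02 * wind**2 + rng.normal(0, 0.2, n)       # wind-sea SWH (m)
hs_swell = 1.5 + 0.4 * index + rng.normal(0, 0.3, n)  # swell SWH (m)
hs_total = np.sqrt(hs_sea**2 + hs_swell**2)

def ols(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_sea = ols(wind**2, hs_sea)                  # wind-sea regressed on wind speed^2
b_swell = ols(index, hs_swell)                # swell regressed on the climate index
pred = np.sqrt((b_sea[0] + b_sea[1] * wind**2) ** 2
               + (b_swell[0] + b_swell[1] * index) ** 2)
r2 = np.corrcoef(pred, hs_total)[0, 1] ** 2
print(f"variance of total SWH captured by the combined model: {100 * r2:.0f}%")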

Relevance: 30.00%

Abstract:

MCNP has long stood as one of the main Monte Carlo radiation transport codes. As with any other Monte Carlo based code, its use has increased as computers have become faster and more affordable. However, using the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may prove unfeasible if a straight analogue transport procedure (with no variance reduction techniques) is employed and precise results are demanded. The calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed-source representation in MCNP to perform this task with more effective sampling: the neutron population in the vicinity of the tallying region is characterized and then used as the source of a geometrically reduced coupled simulation. An extended analysis of source-dependent parameters is carried out in order to understand their influence on simulation performance and on the validity of results. Although discrepant results have been observed for small enveloping regions, the procedure proves very efficient, giving adequate and precise results in shorter times than the standard analogue procedure.
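
The following toy 1-D transport model (plain Python, not an MCNP input deck) illustrates the two-stage idea: a first analogue run records the population crossing a surface near the small tally cell, and a second, much longer run restarts from that surface inside a reduced geometry. In MCNP this role is played by a surface source (SSW/SSR cards) recording the full phase space; here only the crossing fraction is kept, and all physics constants are invented.

import numpy as np

rng = np.random.default_rng(9)
SIGMA_T, P_ABS = 1.0, 0.3            # total cross-section (1/cm), absorption prob.
INTERFACE, TALLY = 8.0, (9.0, 9.2)   # recording surface and small tally cell (cm)

def run(start, n_hist, lower=0.0):
    """Analogue 1-D transport of n_hist histories started at `start` moving
    right; returns (histories reaching INTERFACE, track length in tally cell)."""
    reached, tally = 0, 0.0
    for _ in range(n_hist):
        pos, mu, crossed = start, 1.0, False
        while lower <= pos <= 10.0:
            new = pos + mu * rng.exponential(1.0 / SIGMA_T)
            if not crossed and pos < INTERFACE <= new:
                crossed = True
            lo, hi = min(pos, new), max(pos, new)
            tally += max(0.0, min(hi, TALLY[1]) - max(lo, TALLY[0]))
            pos = new
            if rng.random() < P_ABS:
                break                                    # absorbed
            mu = 1.0 if rng.random() < 0.5 else -1.0     # isotropic scatter (1-D)
        reached += crossed
    return reached, tally

# Stage 1: short analogue run from the true source; keep the crossing fraction.
n1 = 2000
reached, _ = run(0.0, n1)
w = reached / n1
# Stage 2: many more histories restarted from the interface inside a reduced
# geometry [6, 10]; particles leaking below 6 cm are lost, which is the
# approximation that degrades results for small enveloping regions.
n2 = 20000
_, tally2 = run(INTERFACE, n2, lower=6.0)
print(f"two-stage tally estimate: {w * tally2 / n2:.3e} cm per source neutron")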

Relevance: 30.00%

Abstract:

Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in the longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610], which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also present a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples.
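
The practical consequence of this result is easy to state in code: for a single null variance component, the likelihood-ratio statistic is referred to a 50:50 mixture of chi-squared distributions with 0 and 1 degrees of freedom (Stram and Lee, 1994), so the naive chi2(1) p-value is halved. A minimal sketch:

from scipy.stats import chi2

def lrt_pvalue_one_variance_component(lrt_stat):
    """P-value under the 0.5*chi2_0 + 0.5*chi2_1 mixture; the chi2_0 point
    mass at zero contributes nothing to the tail for lrt_stat > 0."""
    return 0.5 * chi2.sf(lrt_stat, df=1)

T = 2.9  # illustrative likelihood-ratio statistic
print(f"naive chi2(1) p-value: {chi2.sf(T, 1):.4f}")
print(f"mixture p-value:       {lrt_pvalue_one_variance_component(T):.4f}")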

Relevance: 30.00%

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike's information criterion based on h-likelihood to select the best-fitting model.

Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters.

Results: Designs with 100 sires, each with at least 100 offspring, are required to keep the standard deviations of the estimated variances below 50% of the true values. When the number of offspring increased, the standard deviations of the estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. The standard deviations of the estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance exists for both micro- and macro-environmental sensitivities.

Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
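
A minimal sketch of the simulated design described above: each sire transmits an effect on the offspring mean (the usual genetic effect) and an effect on the log residual variance (micro-environmental sensitivity). Parameter values are invented, and the DHGLM fitting step (done in ASReml in the study) is not reproduced.

import numpy as np

rng = np.random.default_rng(10)
n_sires, n_off = 100, 100                      # the design size recommended above
sire_mean = rng.normal(0.0, 1.0, n_sires)      # sire effects on the mean
sire_logv = rng.normal(0.0, 0.3, n_sires)      # sire effects on log residual variance

groups = [10.0 + sire_mean[s] + rng.normal(0.0, np.exp(0.5 * sire_logv[s]), n_off)
          for s in range(n_sires)]

# Half-sib groups now differ in both mean and spread; a DHGLM jointly models
# the two, yielding the macro-/micro-environmental genetic variances.
means = np.array([g.mean() for g in groups])
logvars = np.log([g.var(ddof=1) for g in groups])
print(f"variance of group means: {means.var(ddof=1):.2f}")
print(f"variance of log within-group variances: {logvars.var(ddof=1):.2f}")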

Relevance: 30.00%

Abstract:

A number of recent works have introduced statistical methods for detecting genetic loci that affect phenotypic variability, which we refer to as variability-controlling quantitative trait loci (vQTL). These are genetic variants whose allelic state predicts how much phenotype values will vary about their expected means. Such loci are of great potential interest in both human and non-human genetic studies, one reason being that a detected vQTL could represent a previously undetected interaction with other genes or environmental factors. The simultaneous publication of these new methods in different journals has in many cases precluded the opportunity for comparison. We survey some of these methods, the respective trade-offs they imply, and the connections between them. The methods fall into three main groups: classical non-parametric, fully parametric, and semi-parametric two-stage approximations. Choosing between alternatives involves balancing the need for robustness, flexibility, and speed. For each method, we identify important assumptions and limitations, including those of practical importance, such as their scope for including covariates and random effects. We show in simulations that both parametric methods and their semi-parametric approximations can give elevated false-positive rates when they ignore mean-variance relationships intrinsic to the data-generating process. We conclude that the choice of method depends on the trait distribution, the need to include non-genetic covariates, and the population size and structure, coupled with a critical evaluation of how these fit with the assumptions of the statistical model.
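
As an example of the classical non-parametric group, the sketch below scans a single locus for a variance effect with the Brown-Forsythe variant of Levene's test; the genotype coding and effect sizes are synthetic, and a real analysis would also handle covariates and relatedness.

import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(11)
genotype = rng.integers(0, 3, size=600)           # 0/1/2 allele counts at a SNP
sd = np.array([1.0, 1.3, 1.7])[genotype]          # allele increases variability
phenotype = 5.0 + rng.normal(0, sd)

groups = [phenotype[genotype == g] for g in range(3)]
stat, p = levene(*groups, center="median")        # Brown-Forsythe test
print(f"Levene/Brown-Forsythe statistic = {stat:.2f}, p = {p:.2e}")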