989 results for Sample average approximation
Abstract:
Generally, in multiple-hypothesis settings one seeks to reject either all of the hypotheses or just one of them. More recently, the need has emerged to answer the question: "Can we reject at least r hypotheses?" However, statistical tools to address this question are scarce in the literature. We therefore set out to derive general power formulas for the most widely used procedures, namely those of Bonferroni, Hochberg and Holm. We developed an R package for sample size calculation in multiple-endpoint testing, where one requires that at least r of the m hypotheses be significant. We restrict ourselves to the case where all variables are continuous and present four different situations depending on the structure of the variance-covariance matrix of the data.
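The "at least r of m rejections" power can also be checked by Monte Carlo simulation. The sketch below is purely illustrative (it is not the R package described, whose name the abstract does not give) and assumes one-sided z-tests with a Bonferroni correction on possibly correlated endpoints:

```python
import numpy as np
from scipy import stats

def r_power_bonferroni(means, sigma, n, alpha=0.05, r=2, n_sim=20_000, seed=0):
    """Monte Carlo estimate of the probability of rejecting at least r of m
    one-sided z-tests under a Bonferroni correction.

    `means` holds the true standardized effect sizes; correlated endpoints
    are drawn from a multivariate normal with covariance `sigma`.
    """
    rng = np.random.default_rng(seed)
    m = len(means)
    crit = stats.norm.ppf(1 - alpha / m)          # Bonferroni critical value
    # test statistics: sqrt(n) * effect plus correlated standard normal noise
    z = rng.multivariate_normal(np.sqrt(n) * np.asarray(means), sigma, size=n_sim)
    rejections = (z > crit).sum(axis=1)
    return (rejections >= r).mean()
```

Under the null (all effects zero) the estimate collapses toward zero, while sizeable effects push the r-power toward one, which is the quantity the general formulas are meant to deliver without simulation.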
Abstract:
This study is concerned with autoregressive moving average (ARMA) models of time series. ARMA models form a subclass of the general linear models that represent stationary time series, a phenomenon encountered most often in practice by engineers, scientists and economists. It is always desirable to employ models that use parameters parsimoniously; ARMA models achieve parsimony because they involve only a finite number of parameters. Although the discussion is primarily concerned with stationary time series, we later take up homogeneous non-stationary time series, which can be transformed into stationary ones. Time series models, built from present and past data, are used for forecasting future values. Both the physical and the social sciences benefit from forecasting models. The role of forecasting cuts across all fields of management (finance, marketing, production and business economics), as well as signal processing, communications engineering, chemical processes, electronics, etc. This wide applicability of time series is the motivation for this study.
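As a toy illustration of the forecasting use described above, the following sketch fits the simplest ARMA member, an AR(1) model, by least squares and iterates it forward. It is an illustrative example, not code from the study:

```python
import numpy as np

def fit_ar1(x):
    """Least-squares estimate of phi in the AR(1) model x_t = phi * x_{t-1} + e_t."""
    past, present = x[:-1], x[1:]
    return float(past @ present / (past @ past))

def forecast_ar1(x, phi, steps):
    """Iterate the fitted recursion forward from the last observation."""
    preds, last = [], x[-1]
    for _ in range(steps):
        last = phi * last
        preds.append(last)
    return np.array(preds)

# simulate a stationary AR(1) series with phi = 0.7 and forecast 5 steps ahead
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

phi_hat = fit_ar1(x)
preds = forecast_ar1(x, phi_hat, 5)
```

The forecasts decay geometrically toward the series mean, the characteristic behavior of a stationary AR(1) with |phi| < 1.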
Abstract:
Freehand sketching is both a natural and crucial part of design, yet is unsupported by current design automation software. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer to produce a design environment that feels as natural as paper, yet is considerably smarter. One of the most basic steps in accomplishing this is converting the original digitized pen strokes in the sketch into the intended geometric objects using feature point detection and approximation. We demonstrate how multiple sources of information can be combined for feature detection in strokes and apply this technique using two approaches to signal processing, one using simple average-based thresholding and a second using scale space.
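One simple way to realize average-based thresholding on a stroke uses pen speed: people tend to slow down at corners, so samples moving below a fraction of the average speed are feature-point candidates. The function and parameters below are an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def corner_candidates(points, times, frac=0.5):
    """Flag stroke samples whose pen speed falls below `frac` of the mean
    speed; slow-downs are a classic cue for corner (feature) points.

    points: (n, 2) array of digitized pen coordinates
    times:  (n,) array of timestamps
    """
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    dt = np.diff(times)
    speed = d / np.maximum(dt, 1e-9)
    threshold = frac * speed.mean()
    # indices of interior samples moving slower than the threshold
    return np.where(speed < threshold)[0] + 1
```

In practice such speed cues would be combined with curvature information, which is the multi-source combination the abstract refers to.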
Abstract:
This document provides recent evidence on the persistence of wage gaps between formal and informal workers in Colombia, using the non-parametric method proposed by Ñopo (2008a). Using a rich household-level dataset covering 2008-2012, it is found that formal workers earn, on average, between 30 and 60 percent more than informal workers. Regardless of the definition of formality adopted (structuralist or institutionalist), it is clear that formal workers have greater economic advantages than informal ones, yet after controlling for demographic and labor variables an important fraction of the gap remains unexplained.
Abstract:
The performance of the SAOP potential for the calculation of NMR chemical shifts was evaluated. SAOP results show considerable improvement with respect to previous potentials, such as VWN or BP86, at least for the carbon, nitrogen, oxygen, and fluorine chemical shifts. Furthermore, a few NMR calculations carried out on third-period atoms (S, P, and Cl) also improved when using the SAOP potential.
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the differences in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non-central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
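The normal-approximation branch of this comparison can be sketched in a few lines. The code below implements the standard normal-approximation sample size formula for average bioequivalence in a 2 × 2 cross-over; it is an illustrative simplification, not the paper's gold-standard non-central t calculation, and the function name and even-rounding rule are assumptions:

```python
import math
from scipy import stats

def abe_sample_size_normal(cv, theta0=0.95, alpha=0.05, power=0.80,
                           lower=0.80, upper=1.25):
    """Approximate total sample size for average bioequivalence in a 2x2
    cross-over, using the normal approximation to the TOST power.

    cv     : within-subject coefficient of variation of the lognormal outcome
    theta0 : assumed true ratio of geometric means
    """
    sigma_w2 = math.log(1.0 + cv ** 2)     # within-subject log-scale variance
    delta = math.log(theta0)
    margin = math.log(upper)               # symmetric limits on the log scale
    z_a = stats.norm.ppf(1 - alpha)
    # at theta0 = 1 both one-sided tests matter, so beta is split in half
    z_b = stats.norm.ppf(1 - (1 - power) / 2) if delta == 0 else stats.norm.ppf(power)
    n = 2 * sigma_w2 * (z_a + z_b) ** 2 / (margin - abs(delta)) ** 2
    return 2 * math.ceil(n / 2)            # round up to an even total
```

Consistent with the abstract's caution, this approximation can land a few subjects below the exact non-central t answer, so it should be treated as a starting value rather than a final design.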
Abstract:
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy because its magnitude grows with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
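The HVA can be sketched directly from its definition: average the velocity vectors after an inverse-magnitude transform v / |v|^2, then invert the result back. The demo below assumes the local normal velocities of a contour translating with global velocity (1, 0), so the sample of normals is unbiased:

```python
import numpy as np

def harmonic_vector_average(vels):
    """Harmonic vector average of 2-D velocity vectors: average the vectors
    after the inverse-magnitude transform v / |v|^2, then invert back."""
    vels = np.asarray(vels, float)
    inv = vels / (vels ** 2).sum(axis=1, keepdims=True)
    u = inv.mean(axis=0)
    return u / (u @ u)

# local normal velocities of a contour translating with global velocity (1, 0):
# v_i = cos(theta_i) * (cos theta_i, sin theta_i) for local normal angle theta_i
thetas = np.deg2rad(np.arange(-60, 61, 15))
local_vels = np.cos(thetas)[:, None] * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

hva = harmonic_vector_average(local_vels)    # recovers the global velocity (1, 0)
va = local_vels.mean(axis=0)                 # vector average underestimates speed
```

With this symmetric sample the HVA returns exactly the global velocity, while the plain vector average keeps the direction but shrinks the speed, matching the claims in the abstract.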
Abstract:
This paper presents an approximate closed form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds-ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed form formula is reasonably accurate when non-inferiority margins are based on odds-ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed form formula is also reasonably accurate in the cases where the score test closed form formula works well. Outside these scenarios, the Wald test closed form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
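A commonly used Wald-based closed form for this setting works on the log odds-ratio scale. The sketch below is one standard version of such a formula, stated here as an assumption rather than the exact formula compared in the paper:

```python
import math
from scipy import stats

def ni_sample_size_wald(p0, or_alt, or_margin, alpha=0.025, power=0.9):
    """A common Wald-test closed-form sample size per group for a
    non-inferiority comparison of two proportions on the odds-ratio scale.

    p0        : assumed control-group event probability
    or_alt    : odds ratio under the alternative (assumed true effect)
    or_margin : non-inferiority margin odds ratio (margin below alternative)
    """
    odds0 = p0 / (1 - p0)
    p1 = or_alt * odds0 / (1 + or_alt * odds0)   # probability implied by or_alt
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    # asymptotic variance of the log odds-ratio estimate (per-group scale)
    var_term = 1 / (p1 * (1 - p1)) + 1 / (p0 * (1 - p0))
    effect = math.log(or_alt) - math.log(or_margin)
    return math.ceil(z ** 2 * var_term / effect ** 2)
```

As the abstract notes, formulas of this type should be cross-checked against simulation, since their accuracy depends on where the margin and the alternative odds ratio sit.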
Abstract:
Determining the number of sample units that make up a composite sample optimizes the use of labor and reduces errors inherent in soil fertility evaluation and recommendation reports. This study aimed to determine, in three land-use and soil-management systems, the number of sample units needed to form the composite sample for the evaluation of soil fertility. It was concluded that the number of sample units needed to compose the composite sample for determining organic matter, pH, P, K, Ca, Mg, Al, H+Al and base saturation varies with land use, soil management and the error acceptable in estimating the mean. For the same sampling depth, increasing the number of sample units reduced the percentage error in estimating the mean, allowing the recommendation of 14, 14 and 11 sample units under native vegetation, pasture and corn, respectively, for a 20% error in the mean estimate.
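A standard way to compute such numbers of sample units ties together the t quantile, the coefficient of variation of the attribute and the acceptable relative error of the mean, via n = (t * CV / error)^2 iterated to a fixed point. The sketch below is illustrative, with assumed defaults; it is not the procedure used in the study:

```python
import math
from scipy import stats

def sample_units_needed(cv_percent, error_percent, alpha=0.05):
    """Number of sample units so that the mean is estimated within
    `error_percent` of its true value, via n = (t * CV / error)^2.

    The t quantile depends on n, so the formula is iterated; tracking
    visited values guards against the small terminal oscillation of ceil().
    """
    seen = set()
    n = 30                                     # initial guess
    while n not in seen:
        seen.add(n)
        t = stats.t.ppf(1 - alpha / 2, df=n - 1)
        n = max(2, math.ceil((t * cv_percent / error_percent) ** 2))
    return n
```

As in the study, halving the acceptable error roughly quadruples the number of sample units required, which is why a 20% error target keeps the recommendation near a dozen units.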
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
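The core idea of a bias-corrected average forecast can be sketched in a few lines: average the individual forecasts and subtract the pooled bias estimated over the post-sample window. This is an illustrative simplification of the paper's estimator, with assumed names and shapes:

```python
import numpy as np

def bias_corrected_average_forecast(forecasts, actuals, new_forecasts):
    """Bias-corrected average forecast: average the individual forecasts for
    the target period, then subtract the average bias estimated over the
    post-sample evaluation period.

    forecasts     : (T, N) past forecasts from N forecasters over T periods
    actuals       : (T,) realized values for those periods
    new_forecasts : (N,) forecasts for the target period
    """
    avg_bias = (forecasts - actuals[:, None]).mean()   # pooled mean bias
    return float(np.mean(new_forecasts) - avg_bias)
```

Averaging over many forecasters shrinks the idiosyncratic error, and the bias correction removes the common systematic error, which is the mechanism behind the zero-limiting mean-squared error result.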
Abstract:
This paper uses a multivariate response surface methodology to analyze the size distortion of the BDS test when applied to standardized residuals of first-order GARCH processes. The results show that the asymptotic standard normal distribution is an unreliable approximation, even in large samples. On the other hand, a simple log-transformation of the squared standardized residuals seems to correct most of the size problems. Nonetheless, the estimated response surfaces can provide not only a measure of the size distortion, but also more adequate critical values for the BDS test in small samples.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
We propose a new statistic to control the covariance matrix of bivariate processes. This new statistic is based on the sample variances of the two quality characteristics, in short the VMAX statistic. The points plotted on the chart correspond to the maximum of the values of these two variances. The reasons to consider the VMAX statistic instead of the generalized variance |S| are its faster detection of process changes and its better diagnostic feature; that is, with the VMAX statistic it is easier to identify the out-of-control variable. We study the double sampling (DS) and the exponentially weighted moving average (EWMA) charts based on the VMAX statistic. © 2008 Elsevier B.V. All rights reserved.
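A minimal sketch of the VMAX statistic and its chart logic follows, with an assumed control limit (the paper derives proper limits for the DS and EWMA versions):

```python
import numpy as np

def vmax_statistic(sample_x, sample_y):
    """VMAX statistic for monitoring a bivariate covariance matrix: the
    larger of the two sample variances of the quality characteristics."""
    return max(np.var(sample_x, ddof=1), np.var(sample_y, ddof=1))

def vmax_chart(samples_x, samples_y, control_limit):
    """Return, for each subgroup, the VMAX value and an out-of-control flag;
    a signal points directly at whichever variance exceeded the limit."""
    values, signals = [], []
    for sx, sy in zip(samples_x, samples_y):
        v = vmax_statistic(sx, sy)
        values.append(v)
        signals.append(v > control_limit)
    return values, signals
```

The diagnostic advantage over |S| is visible here: when the chart signals, one simply checks which of the two variances produced the maximum.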
Abstract:
OBJECTIVE: To perform the cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students and to investigate its reliability, validity and cross-cultural invariance. METHODS: Face validation involved a multidisciplinary team, and content validation was performed. The Portuguese version was completed online in 2009 by 958 Brazilian and 556 Portuguese urban university students. Confirmatory factor analysis was carried out using χ²/df, the comparative fit index (CFI), the goodness of fit index (GFI) and the root mean square error of approximation (RMSEA) as fit indices. To verify the stability of the factor solution relative to the original English version, cross-validation was performed on 2/3 of the total sample and replicated on the remaining 1/3. Convergent validity was estimated by the average variance extracted and composite reliability. Discriminant validity was assessed, and internal consistency was estimated by Cronbach's alpha coefficient. Concurrent validity was estimated by correlational analysis of the Portuguese version with the mean scores of the Copenhagen Burnout Inventory; divergent validity was assessed against the Beck Depression Inventory. Invariance of the model between the Brazilian and Portuguese samples was evaluated. RESULTS: The three-factor model of Exhaustion, Disbelief and Efficacy showed adequate fit (χ²/df = 8.498; CFI = 0.916; GFI = 0.902; RMSEA = 0.086). The factor structure was stable (λ: χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residuals: χ²dif = 21.514, p = 0.121). Adequate convergent validity (AVE = 0.45;0.64, CR = 0.82;0.88), discriminant validity (ρ² = 0.06;0.33) and internal consistency (α = 0.83;0.88) were observed. Concurrent validity of the Portuguese version with the Copenhagen Inventory was adequate (r = 0.21;0.74). Assessment of the instrument's divergent validity was hindered by the theoretical closeness of the Exhaustion and Disbelief dimensions of the Portuguese version to the Beck scale. Invariance of the instrument between the Brazilian and Portuguese samples was not observed (λ: χ²dif = 84.768, p < 0.001; Cov: χ²dif = 129.206, p < 0.001; Residuals: χ²dif = 518.760, p < 0.001). CONCLUSIONS: The Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant across countries, indicating a lack of cross-cultural stability.
Abstract:
This study investigates the possible differences between actors' and nonactors' vocal projection strategies using acoustic and perceptual analyses. A total of 11 male actors and 10 male nonactors volunteered as subjects, reading an extended text sample at habitual, moderate, and loud levels. The samples were analyzed for sound pressure level (SPL), alpha ratio (difference between the average SPL of the 1-5 kHz region and the average SPL of the 50 Hz-1 kHz region), fundamental frequency (F0), and long-term average spectrum (LTAS). Through LTAS, the mean frequency of the first formant (F1) range, the mean frequency of the actor's formant, the level differences between the F1 frequency region and the F0 region (L1-L0), and the level differences between the strongest peak at 0-1 kHz and that at 3-4 kHz were measured. Eight voice specialists evaluated perceptually the degree of projection, loudness, and tension in the samples. The actors had a greater alpha ratio, a stronger level of the actor's formant range, and a higher degree of perceived projection and loudness at all loudness levels. SPL, however, did not differ significantly between the actors and nonactors, and no differences were found in the mean formant frequency ranges. The alpha ratio and the relative level of the actor's formant range seemed to be related to the degree of perceived loudness. From the physiological point of view, a more favorable glottal setting, providing a higher glottal closing speed, may be characteristic of these actors' projected voices. So the projected voices, in this group of actors, were more related to the glottal source than to the resonance of the vocal tract.
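The alpha ratio can be approximated from a recording's magnitude spectrum. The sketch below simplifies the band SPLs to mean spectral magnitudes in dB; it is an illustrative computation, not the authors' measurement pipeline:

```python
import numpy as np

def alpha_ratio(signal, sr):
    """Alpha ratio: average spectral level of the 1-5 kHz band minus that of
    the 50 Hz-1 kHz band, in dB, computed from the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    low = spectrum[(freqs >= 50) & (freqs < 1000)]
    high = spectrum[(freqs >= 1000) & (freqs <= 5000)]
    level = lambda band: 20 * np.log10(np.mean(band) + 1e-12)
    return level(high) - level(low)

# a voice with more high-frequency energy yields a higher (less negative) alpha ratio
sr = 16000
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 200 * t)                   # energy only below 1 kHz
bright = dull + 0.5 * np.sin(2 * np.pi * 3000 * t)   # added energy in the 1-5 kHz band
```

A higher alpha ratio reflects relatively stronger high-frequency energy, which is the spectral tilt difference the study found between actors and nonactors.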