974 results for variance-ratio tests


Relevance: 80.00%

Abstract:

Samples of 11,000 King George whiting (Sillaginodes punctata) from the South Australian commercial and recreational catch, supplemented by research samples, were aged from otoliths. Samples were analyzed from three coastal regions and by sex. Most sampling was undertaken at fish processing plants, from which only fish longer than the legal minimum length were obtained. A left-truncated normal distribution of lengths at monthly age was therefore employed as the model likelihood. Mean length at monthly age was described by a generalized von Bertalanffy formula with sinusoidal seasonality, and the likelihood standard deviation was modeled to vary allometrically with mean length. A range of related formulas (with 6 to 8 parameters) for seasonal mean length at age was compared. In addition to likelihood ratio tests of relative fit, model selection criteria were a minimal occurrence of high uncertainties (>20% SE), of high correlations (>0.9, >0.95, and >0.99), and of parameter estimates at their biological limits; we also sought a model with the minimum number of parameters. A generalized von Bertalanffy formula with t0 fixed at 0 was chosen. The truncated likelihood alleviated the overestimation bias in mean length at age that would otherwise accrue from catch samples being restricted to legal sizes.
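
A minimal sketch of the estimation idea, assuming Somers' (1988) seasonal von Bertalanffy form, a hypothetical legal minimum length, and invented parameter values; it is not the authors' model or code, but it shows how a left-truncated normal likelihood counters the bias from sampling only legal-sized fish:

```python
# Illustrative sketch only: seasonal von Bertalanffy mean length with t0 = 0,
# an allometric SD, and a left-truncated normal likelihood. LEGAL_MIN and all
# starting values are hypothetical.
import numpy as np
from scipy import optimize, stats

LEGAL_MIN = 31.0  # hypothetical legal minimum length (cm)

def mean_length(age_months, Linf, K, C, ts):
    """Seasonal von Bertalanffy mean length at age, with t0 fixed at 0."""
    t = age_months / 12.0
    s = (C * K / (2 * np.pi)) * np.sin(2 * np.pi * (t - ts))
    s0 = (C * K / (2 * np.pi)) * np.sin(2 * np.pi * (0.0 - ts))
    return Linf * (1.0 - np.exp(-K * t - s + s0))

def neg_log_lik(params, age_months, lengths):
    Linf, K, C, ts, sd0, b = params
    if Linf <= 0 or K <= 0 or sd0 <= 0:
        return np.inf
    mu = mean_length(age_months, Linf, K, C, ts)
    if np.any(mu <= 0):
        return np.inf
    sigma = sd0 * (mu / Linf) ** b     # SD varies allometrically with mean length
    a = (LEGAL_MIN - mu) / sigma       # left truncation at the legal minimum
    return -np.sum(stats.truncnorm.logpdf(lengths, a, np.inf, loc=mu, scale=sigma))

# toy data standing in for the processing-plant samples
rng = np.random.default_rng(0)
ages = rng.uniform(12, 72, 500)
lengths = mean_length(ages, 45, 0.4, 0.6, 0.2) + rng.normal(0, 2.5, ages.size)
keep = lengths > LEGAL_MIN             # only legal-sized fish are observed
ages, lengths = ages[keep], lengths[keep]

fit = optimize.minimize(neg_log_lik, x0=[45.0, 0.4, 0.5, 0.2, 3.0, 0.5],
                        args=(ages, lengths), method="Nelder-Mead")
print(fit.x)  # Linf, K, C, ts, sd0, b
```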

Relevance: 80.00%

Abstract:

The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections, and its design has been challenged by the extension of connections over wireless links. We ask a fundamental question: how well can TCP predict network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power to enable TCP (or its variants) to perform well in generalized network settings. To that end, we use Maximum Likelihood Ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or TCP throughput that are conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating the packet loss type; and that packet delay is a better signal of network state than short-term throughput. We also demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect estimation.
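
A minimal sketch of a likelihood-ratio loss-type detector, with invented gamma delay densities standing in for the conditional distributions estimated in the paper:

```python
# Illustrative sketch only: classify a packet loss as congestion- or
# wireless-induced from round-trip delays observed around the loss, using a
# log-likelihood ratio between assumed conditional delay densities.
import numpy as np
from scipy import stats

# Hypothetical conditional delay models (seconds): congestion losses tend to
# coincide with high queueing delay, wireless losses do not.
delay_given_congestion = stats.gamma(a=6.0, scale=0.02)
delay_given_wireless = stats.gamma(a=2.0, scale=0.02)

def classify_loss(delays, threshold=0.0):
    """Return 'congestion' if the log-likelihood ratio exceeds the threshold."""
    llr = np.sum(delay_given_congestion.logpdf(delays)
                 - delay_given_wireless.logpdf(delays))
    return ("congestion", llr) if llr > threshold else ("wireless", llr)

print(classify_loss(np.array([0.14, 0.11, 0.13])))  # high delays near the loss
print(classify_loss(np.array([0.03, 0.05, 0.04])))  # low delays near the loss
```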

Relevance: 80.00%

Abstract:

(This Technical Report revises TR-BUCS-2003-011.) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections, and its design has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to infer at the source host the reason for a packet loss, whether congestion or wireless transmission error. Our approach is "mostly" end-to-end since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g., a wireless access agent). Specifically, we use Maximum Likelihood Ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to illustrate the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path, and we show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer these conditional delay distributions efficiently. We also demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect classification.
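
A minimal sketch combining the two ingredients described above: an HMM fitted to a toy delay trace (hmmlearn is an assumed tool choice, not prescribed by the paper) and a MAP decision that uses a stationary prior for wireless loss; all names and numbers are illustrative only.

```python
# Illustrative sketch only: learn two delay regimes with a Gaussian HMM, then
# combine a long-term prior for wireless loss with the learned conditional
# densities to classify an observed loss.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.stats import norm

rng = np.random.default_rng(1)
# toy delay trace: low-delay (uncongested) and high-delay (congested) periods
delays = np.concatenate([rng.normal(0.04, 0.01, 300),
                         rng.normal(0.12, 0.02, 300)]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200).fit(delays)
means = hmm.means_.ravel()
sds = np.sqrt(hmm.covars_.ravel())
congested = int(np.argmax(means))          # state with the larger mean delay

def classify_loss(delay_at_loss, p_wireless=0.2):
    """MAP decision: wireless vs. congestion loss, given a stationary prior."""
    lik_congestion = norm.pdf(delay_at_loss, means[congested], sds[congested])
    lik_wireless = norm.pdf(delay_at_loss, means[1 - congested], sds[1 - congested])
    post_wireless = p_wireless * lik_wireless
    post_congestion = (1 - p_wireless) * lik_congestion
    return "wireless" if post_wireless > post_congestion else "congestion"

print(classify_loss(0.05), classify_loss(0.13))
```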

Relevance: 80.00%

Abstract:

BACKGROUND: Recent studies suggest that there is a learning curve for metal-on-metal hip resurfacing. The purpose of this study was to assess whether implant positioning changed with surgeon experience and whether positioning and component sizing were associated with implant longevity. METHODS: We evaluated the first 361 consecutive hip resurfacings performed by a single surgeon, with a mean follow-up of 59 months (range, 28 to 87 months). Pre- and postoperative radiographs were assessed to determine the inclination of the acetabular component as well as the sagittal and coronal femoral stem-neck angles. Changes in the precision of component placement were determined by assessing changes in the standard deviation of each measurement using variance ratio tests and linear regression analysis. Additionally, the cup and stem-shaft angles as well as component sizes were compared between the 31 hips that failed over the follow-up period and the surviving components to assess for any differences that might have been associated with an increased risk of failure. RESULTS: Surgeon experience was correlated with improved precision of the anteroposterior and lateral positioning of the femoral component. However, femoral and acetabular radiographic implant positioning angles did not differ between the surviving hips and the failures. The failures had smaller mean femoral component diameters than the non-failure group (44 versus 47 millimeters). CONCLUSIONS: These results suggest that there may be differences in implant positioning in early versus late learning curve procedures, but that in the absence of recognized risk factors such as intraoperative notching of the femoral neck and cup inclination in excess of 50 degrees, component positioning does not appear to be associated with failure. Nevertheless, surgeons should exercise caution when operating on patients with small femoral necks, especially early in the learning curve.
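
A minimal sketch of the variance-ratio comparison described above, as a two-sided F test on hypothetical inclination-angle data from early versus late cases:

```python
# Illustrative sketch only: variance-ratio (F) test for whether the spread of
# acetabular cup inclination angles tightened between early and late cases.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
early = rng.normal(45, 6, 100)  # inclination angles, first 100 cases (degrees)
late = rng.normal(45, 4, 100)   # later cases

f_stat = np.var(early, ddof=1) / np.var(late, ddof=1)
df1, df2 = early.size - 1, late.size - 1
p_two_sided = 2 * min(stats.f.cdf(f_stat, df1, df2),
                      stats.f.sf(f_stat, df1, df2))
print(f"F = {f_stat:.2f}, p = {p_two_sided:.4f}")
```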

Relevance: 80.00%

Abstract:

Margin policy is used by regulators to inhibit excessive volatility and stabilize the stock market in the long run. The effect of this policy on the stock market has been widely tested empirically. However, most prior studies are limited in that they investigate the margin requirement for the overall stock market rather than for individual stocks, and the time periods examined are confined to the pre-1974 period, as no change in the margin requirement occurred after 1974 in the U.S. This thesis addresses these limitations by providing a direct examination of the effect of the margin requirement on the return, volume, and volatility of individual companies, using more recent data from the Canadian stock market. Using a variance ratio test and an event study with a conditional volatility (EGARCH) model, we find no convincing evidence that a change in the margin requirement affects subsequent stock return volatility. We find similar results for returns and trading volume. These empirical findings lead us to conclude that the use of margin policy by regulators fails to achieve the goal of inhibiting speculative activity and stabilizing volatility.
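
A minimal sketch of a Lo-MacKinlay variance ratio test under the homoskedastic random-walk null, without the small-sample bias corrections, applied to toy returns; it illustrates the kind of test used in the thesis rather than reproducing its code:

```python
# Illustrative sketch only: variance ratio VR(q) and its asymptotic z-statistic
# under the homoskedastic iid null (no bias corrections).
import numpy as np
from scipy.stats import norm

def variance_ratio(returns, q):
    r = np.asarray(returns, dtype=float)
    n = r.size
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period sums
    varq = np.sum((rq - q * mu) ** 2) / (n * q)
    vr = varq / var1
    z = (vr - 1.0) * np.sqrt(3.0 * q * n / (2.0 * (2 * q - 1) * (q - 1)))
    return vr, z, 2 * norm.sf(abs(z))

rng = np.random.default_rng(3)
daily_returns = rng.normal(0, 0.01, 1000)           # toy random-walk returns
print(variance_ratio(daily_returns, q=5))
```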

Relevance: 80.00%

Abstract:

This paper tests the predictions of the Barro-Gordon model using US data on inflation and unemployment. To that end, it constructs a general game-theoretic model with asymmetric preferences that nests the Barro-Gordon model and a version of Cukierman’s model as special cases. Likelihood Ratio tests indicate that the restriction imposed by the Barro-Gordon model is rejected by the data, whereas the one imposed by the version of Cukierman’s model is not. Reduced-form estimates are consistent with the view that the Federal Reserve weights positive unemployment deviations from the expected natural rate more heavily than negative ones.
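
A minimal sketch of the nested likelihood ratio test used here, with placeholder log-likelihood values rather than the paper's estimates:

```python
# Illustrative sketch only: generic LR test of a nested restriction, the device
# used to test whether the Barro-Gordon restriction holds.
from scipy.stats import chi2

def lr_test(loglik_restricted, loglik_unrestricted, df):
    """LR statistic and p-value for a restriction with `df` constraints."""
    lr = 2.0 * (loglik_unrestricted - loglik_restricted)
    return lr, chi2.sf(lr, df)

# placeholder values: the asymmetric-preference model nests the restricted
# model via one constraint
print(lr_test(loglik_restricted=-512.4, loglik_unrestricted=-508.1, df=1))
```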

Relevance: 80.00%

Abstract:

Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability and of the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools that allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests, but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained, together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally, we point out a number of unresolved problems in the statistical analysis of compositional processes.
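
A minimal sketch of the staying-in-the-simplex toolkit mentioned above: a centred log-ratio transform followed by a singular value decomposition, applied to random compositions rather than the Scottish limestone data:

```python
# Illustrative sketch only: clr transform and SVD of centred log-ratio data,
# giving the proportion of compositional variability per component.
import numpy as np

def clr(x):
    """Centred log-ratio transform of compositions (rows sum to 1)."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(4)
raw = rng.dirichlet(alpha=np.ones(5) * 2.0, size=50)  # 50 five-part compositions

z = clr(raw)
z_centred = z - z.mean(axis=0)                        # perturbation by the centre
u, s, vt = np.linalg.svd(z_centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("proportion of compositional variability by component:", explained.round(3))
```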

Relevance: 80.00%

Abstract:

In recent years, a sharp divergence of London Stock Exchange equity prices from dividends has been noted. In this paper, we examine whether this divergence can be explained by the existence of a speculative bubble. Three different empirical methodologies are used: variance bounds tests, bubble specification tests, and cointegration tests based on both ex post and ex ante data. We find that stock prices diverged significantly from their fundamental values during the late 1990s and that this divergence has all the characteristics of a bubble.
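
A minimal sketch of one of the three methodologies, an Engle-Granger cointegration test between toy log-price and log-dividend series (statsmodels is an assumed tool choice):

```python
# Illustrative sketch only: cointegration test between log prices and log
# dividends on simulated series that are cointegrated by construction.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(5)
log_div = np.cumsum(rng.normal(0, 0.02, 500))          # random-walk log dividends
log_price = 3.0 + log_div + rng.normal(0, 0.05, 500)   # cointegrated with dividends

t_stat, p_value, crit = coint(log_price, log_div)
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")
```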

Relevance: 80.00%

Abstract:

Gardner's popular model of perfect competition in the marketing sector is extended to a conjectural-variations oligopoly with endogenous entry. By revising Gardner's comparative statics on the "farm-retail price ratio," tests of hypotheses about food industry conduct are derived. Using data from a recent article by Wohlgenant, which employs Gardner's framework, we test the validity of his maintained hypothesis that the food industries are perfectly competitive. No evidence is found of departures from competition in the output markets of the food industries of eight commodity groups: (a) beef and veal, (b) pork, (c) poultry, (d) eggs, (e) dairy, (f) processed fruits and vegetables, (g) fresh fruit, and (h) fresh vegetables.

Relevance: 80.00%

Abstract:

The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
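
A minimal sketch of the two published reference MCP models named above, fitted to toy concurrent records; variable names and parameter values are illustrative:

```python
# Illustrative sketch only: simple linear regression MCP versus the variance
# ratio MCP method, then prediction from a historic reference record.
import numpy as np

rng = np.random.default_rng(6)
ref = rng.weibull(2.0, 2000) * 8.0                   # reference-site speeds (m/s)
site = 0.9 * ref + rng.normal(0, 1.0, ref.size)      # correlated target-site speeds

# 1) simple linear regression MCP
slope, intercept = np.polyfit(ref, site, 1)
predict_lr = lambda r: intercept + slope * r

# 2) variance ratio MCP: match mean and standard deviation instead of
#    minimising squared error, which preserves the predicted variance
ratio = site.std(ddof=1) / ref.std(ddof=1)
predict_vr = lambda r: site.mean() + ratio * (r - ref.mean())

long_term_ref = rng.weibull(2.0, 10000) * 8.0        # historic reference record
print("LR mean/std:", predict_lr(long_term_ref).mean(), predict_lr(long_term_ref).std())
print("VR mean/std:", predict_vr(long_term_ref).mean(), predict_vr(long_term_ref).std())
```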

Relevance: 80.00%

Abstract:

Speculative bubbles are generated when investors include the expectation of the future price in their information set. Under these conditions, the actual market price of the security, that is set according to demand and supply, will be a function of the future price and vice versa. In the presence of speculative bubbles, positive expected bubble returns will lead to increased demand and will thus force prices to diverge from their fundamental value. This paper investigates whether the prices of UK equity-traded property stocks over the past 15 years contain evidence of a speculative bubble. The analysis draws upon the methodologies adopted in various studies examining price bubbles in the general stock market. Fundamental values are generated using two models: the dividend discount and the Gordon growth. Variance bounds tests are then applied to test for bubbles in the UK property asset prices. Finally, cointegration analysis is conducted to provide further evidence on the presence of bubbles. Evidence of the existence of bubbles is found, although these appear to be transitory and concentrated in the mid-to-late 1990s.
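
A minimal sketch of the Gordon growth valuation used to generate fundamental values, with invented dividend, growth, and discount-rate inputs:

```python
# Illustrative sketch only: constant-growth (Gordon) dividend discount values,
# the kind of fundamental-value series compared against observed prices.
import numpy as np

def gordon_value(dividend, growth, discount_rate):
    """Price implied by D * (1 + g) / (r - g); requires r > g."""
    return dividend * (1.0 + growth) / (discount_rate - growth)

dividends = np.array([1.00, 1.04, 1.10, 1.15])  # per-share dividends by year
print(gordon_value(dividends, growth=0.03, discount_rate=0.08))
```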

Relevance: 80.00%

Abstract:

Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both describe the data better than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
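
A minimal sketch of fitting a two-regime Markov switching model with switching variance to a simulated return series; statsmodels is an assumed tool choice and the specification is illustrative, not the paper's exact model:

```python
# Illustrative sketch only: two-regime Markov switching model with
# regime-specific mean and variance, fitted to a toy two-regime series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
calm = rng.normal(0.005, 0.01, 200)
stressed = rng.normal(-0.01, 0.04, 100)
returns = np.concatenate([calm, stressed, calm])

model = sm.tsa.MarkovRegression(returns, k_regimes=2, trend="c",
                                switching_variance=True)
res = model.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])  # regime probabilities
```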

Relevance: 80.00%

Abstract:

Influences of inbreeding on daily milk yield (DMY), age at first calving (AFC), and calving intervals (CI) were determined on a highly inbred zebu dairy subpopulation of the Guzerat breed. Variance components were estimated using animal models in single-trait analyses. Two approaches were employed to estimate inbreeding depression: using individual increase in inbreeding coefficients or using inbreeding coefficients as possible covariates included in the statistical models. The pedigree file included 9,915 animals, of which 9,055 were inbred, with an average inbreeding coefficient of 15.2%. The maximum inbreeding coefficient observed was 49.45%, and the average inbreeding for the females still in the herd during the analysis was 26.42%. Heritability estimates were 0.27 for DMY and 0.38 for AFC. The genetic variance ratio estimated with the random regression model for CI ranged around 0.10. Increased inbreeding caused poorer performance in DMY, AFC, and CI. However, some of the cows with the highest milk yield were among the highly inbred animals in this subpopulation. Individual increase in inbreeding used as a covariate in the statistical models accounted for inbreeding depression while avoiding overestimation that may result when fitting inbreeding coefficients.
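
A minimal sketch of the covariate approach to inbreeding depression, regressing a toy milk-yield phenotype on individual increase in inbreeding with ordinary least squares; the pedigree-based animal models used for the variance components are not reproduced here:

```python
# Illustrative sketch only: estimating inbreeding depression from the slope of
# a phenotype on individual increase in inbreeding (toy data, simple OLS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
delta_f = rng.uniform(0.0, 0.3, 400)                # individual increase in inbreeding
dmy = 8.0 - 4.0 * delta_f + rng.normal(0, 1.0, 400) # daily milk yield (kg), toy data

X = sm.add_constant(delta_f)
fit = sm.OLS(dmy, X).fit()
print(fit.params)  # slope ~ change in DMY per unit increase in inbreeding
```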

Relevance: 80.00%

Abstract:

We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1-39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343-365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33-53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655-661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.
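
A minimal sketch of an unadjusted likelihood ratio test on the Weibull shape parameter, with no covariates and no censoring, as the baseline that the modified profile likelihoods refine; data and values are simulated:

```python
# Illustrative sketch only: LR test of H0: shape = 1 (exponential) against a
# free Weibull shape, location fixed at 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
times = stats.weibull_min.rvs(c=1.5, scale=10.0, size=200, random_state=rng)

# unrestricted fit (shape and scale free)
c_hat, loc, scale_hat = stats.weibull_min.fit(times, floc=0)
ll_free = np.sum(stats.weibull_min.logpdf(times, c_hat, 0, scale_hat))

# restricted fit under H0: shape = 1; the scale MLE is then the sample mean
ll_h0 = np.sum(stats.weibull_min.logpdf(times, 1.0, 0, times.mean()))

lr = 2.0 * (ll_free - ll_h0)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.4f}")
```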

Relevance: 80.00%

Abstract:

We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models to those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random and missing completely at random models are more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space as well as lack of identifiability of the parameters of saturated models may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that a MNAR model is misspecified because the estimate is on the boundary of the parameter space.
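
A minimal sketch of EM estimation for a 2x2 table with some units classified only on the row variable, under MAR, compared with discarding the partially classified counts; the counts are invented:

```python
# Illustrative sketch only: EM for cell probabilities of a 2x2 table with
# supplementary row-only counts, versus a complete-case analysis.
import numpy as np

full = np.array([[40.0, 10.0],     # fully classified counts n_ij
                 [15.0, 35.0]])
row_only = np.array([30.0, 20.0])  # units with the column entry missing

p = np.full((2, 2), 0.25)          # initial cell probabilities
for _ in range(200):               # EM iterations
    # E-step: split each partially classified row count across columns
    expected = full + row_only[:, None] * p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate cell probabilities from the completed counts
    p = expected / expected.sum()

complete_case = full / full.sum()
print("EM (MAR):", p.round(3))
print("complete-case only:", complete_case.round(3))
```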