941 results for Log-linear model


Relevance: 90.00%

Abstract:

We review some issues related to the implications of different missing-data mechanisms for statistical inference in contingency tables, and consider simulation studies to compare the results obtained under such models with those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random (MAR) and missing completely at random (MCAR) models are more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space, as well as lack of identifiability of the parameters of saturated models, may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that an MNAR model is misspecified simply because the estimate is on the boundary of the parameter space.
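
As an illustration of the MAR analysis discussed above, the following minimal sketch (illustrative counts, not from the paper) contrasts a complete-case analysis of a 2x2 table with EM estimation that uses the partially classified units:

```python
# A minimal sketch, assuming illustrative counts: EM estimation of the
# cell probabilities of a 2x2 table under MAR, where some units have
# the column variable missing, versus a complete-case analysis that
# discards those units.
import numpy as np

n = np.array([[50., 30.], [20., 40.]])   # fully classified counts
m = np.array([25., 15.])                 # row observed, column missing

p_cc = n / n.sum()                       # complete-case estimate

# EM under MAR: allocate each partially classified count across the
# columns in proportion to the current within-row probabilities.
p = np.full((2, 2), 0.25)
for _ in range(200):
    frac = p / p.sum(axis=1, keepdims=True)   # P(column | row)
    filled = n + m[:, None] * frac            # E-step
    p = filled / filled.sum()                 # M-step

print("complete-case:\n", p_cc)
print("EM under MAR:\n", p)
```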

Relevance: 90.00%

Abstract:

Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Often, the number of observations is small, and it is thus important to use inference strategies that incorporate small-sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test, and also to a test obtained from a modified profile likelihood function. Our results generalize those in [Zucker, D.M., Lieberman, O., Manor, O., 2000. Improved small sample inference in the mixed linear model: Bartlett correction and adjusted likelihood. Journal of the Royal Statistical Society B, 62, 827-838] by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for a nonlinear covariance matrix structure for the random effects. We report simulation results which show that the proposed tests display superior finite-sample behavior relative to the standard likelihood ratio test. An application is also presented and discussed.
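
A minimal sketch of how such a correction is applied in practice; the adjustment value below is hypothetical, while the paper derives it analytically for the mixed linear model:

```python
# A minimal sketch of applying a Bartlett correction: the likelihood
# ratio statistic is rescaled so its mean matches the chi-square
# reference more closely. The adjustment b is hypothetical here.
from scipy.stats import chi2

LR = 7.10   # unadjusted likelihood ratio statistic (illustrative)
q = 2       # dimension of the vector-valued parameter of interest
b = 0.35    # hypothetical Bartlett adjustment, so E[LR] ~ q * (1 + b/q)

LR_corrected = LR / (1.0 + b / q)

for name, stat in [("standard LR", LR), ("Bartlett-corrected", LR_corrected)]:
    print(f"{name}: statistic = {stat:.3f}, p-value = {chi2.sf(stat, q):.4f}")
```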

Relevance: 90.00%

Abstract:

Birnbaum-Saunders (BS) models have been widely applied in material fatigue studies and reliability analyses to relate the total time until failure to some type of cumulative damage. In many problems in the medical field, such as chronic cardiac diseases and different types of cancer, cumulative damage caused by several risk factors might cause some degradation that leads to a fatigue process. In these cases, BS models can be suitable for describing the propagation lifetime. However, since the cumulative damage is assumed to be normally distributed in the BS distribution, the parameter estimates from this model can be sensitive to outlying observations. In order to attenuate this influence, we present BS models in which a Student-t distribution is assumed to describe the cumulative damage. In particular, we show that the maximum likelihood estimators of the Student-t log-BS models attribute smaller weights to outlying observations, which produces robust parameter estimates. Some inferential results are also presented. In addition, a diagnostic analysis based on local influence and on deviance-component and martingale-type residuals is developed. Finally, a motivating example from the medical field is analyzed using log-BS regression models. Because the parameter estimates appear to be very sensitive to outlying and influential observations, the Student-t log-BS regression model should attenuate such influences. The model-checking methodologies developed in this paper are used to compare the fitted models.
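
The robustness mechanism can be seen directly in the Student-t maximum-likelihood weights; a minimal sketch with made-up standardized residuals:

```python
# A minimal sketch, with made-up residuals: in the Student-t score
# equations each observation enters with weight
# w_i = (nu + 1) / (nu + d_i^2), where d_i is the standardized
# residual, so outliers are automatically downweighted.
import numpy as np

nu = 4.0                                 # assumed degrees of freedom
d = np.array([0.1, -0.8, 1.2, 6.0])      # standardized residuals
w = (nu + 1.0) / (nu + d ** 2)           # Student-t weights

for di, wi in zip(d, w):
    print(f"residual {di:5.1f} -> weight {wi:.3f}")
# the outlier at d = 6.0 receives weight ~0.13, versus ~1.25 near zero
```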

Relevance: 90.00%

Abstract:

Detecting both the major genes that control the phenotypic mean and those that control the phenotypic variance is a problem that has been raised in quantitative trait loci (QTL) analysis. In order to map both kinds of genes, we applied the idea of the classic Haley-Knott regression to double generalized linear models. We performed both kinds of QTL detection for a Red Jungle Fowl x White Leghorn F2 intercross using double generalized linear models. It is shown that the double generalized linear model is a proper and efficient approach for localizing variance-controlling genes. We compared models with and without a fixed sex effect and prefer including the sex effect, since it reduces the residual variance. We also found that different genes may affect body weight at different times as the chicken grows.
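
A minimal sketch of the double generalized linear model iteration, on hypothetical data; statsmodels is used for the gamma dispersion submodel, and in Haley-Knott fashion the covariate x would be an expected QTL genotype given flanking markers:

```python
# A minimal sketch: a double generalized linear model alternates
# between a weighted regression for the mean and a gamma GLM with log
# link on the squared residuals for the variance. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 2, 400)                 # e.g. expected QTL dosage
X = sm.add_constant(x)
y = 1.0 + 0.5 * x + rng.normal(scale=np.exp(0.2 + 0.4 * x) ** 0.5)

w = np.ones_like(y)
for _ in range(10):
    mean_fit = sm.WLS(y, X, weights=w).fit()        # mean submodel
    r2 = (y - mean_fit.fittedvalues) ** 2
    disp_fit = sm.GLM(r2, X, family=sm.families.Gamma(
        link=sm.families.links.Log())).fit()        # variance submodel
    w = 1.0 / disp_fit.fittedvalues                 # update weights

print("mean effects:", mean_fit.params)             # ~ [1.0, 0.5]
print("log-variance effects:", disp_fit.params)     # ~ [0.2, 0.4]
```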

Relevance: 90.00%

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike's information criterion, based on the h-likelihood, to select the best-fitting model.

Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters.

Results: Designs with 100 sires, each with at least 100 offspring, are required for the standard deviations of the estimated variances to be lower than 50% of the true values. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities exists.

Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
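
A minimal simulation sketch of the model structure described above; the parameter values and the simplified sire model are my own assumptions, and half-sib transmission details are omitted for brevity:

```python
# A minimal sketch: each sire carries a reaction-norm slope on a known
# environmental covariate (macro sensitivity) and an effect on the log
# residual variance (micro sensitivity). Values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_sires, n_off = 100, 100                  # the design recommended above
var_slope, var_micro = 0.1, 0.05           # macro / micro genetic variances

sire_slope = rng.normal(0, var_slope ** 0.5, n_sires)   # reaction-norm slopes
sire_logvar = rng.normal(0, var_micro ** 0.5, n_sires)  # log residual variances

y = []
for s in range(n_sires):
    env = rng.normal(0, 1, n_off)                       # macro covariate
    e = rng.normal(0, np.exp(0.5 * sire_logvar[s]), n_off)
    y.append(10.0 + sire_slope[s] * env + e)

print("total phenotypic variance:", np.var(np.concatenate(y)).round(3))
```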

Relevance: 90.00%

Abstract:

Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms.

Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model.

Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
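
A small sketch of the distributional fact behind this algorithm: for a normal residual, the squared residual is gamma-distributed with mean equal to the residual variance, which is what justifies fitting a gamma GLM to squared residuals:

```python
# A minimal sketch: for e ~ N(0, sigma^2), e^2 = sigma^2 * chi2(1),
# i.e. gamma with shape 1/2 and mean sigma^2, so a gamma GLM with log
# link on squared residuals recovers a residual-variance model.
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 2.5
e2 = rng.normal(0, sigma2 ** 0.5, 200_000) ** 2

print("mean of e^2:", e2.mean())   # ~ sigma2
print("var of e^2:", e2.var())     # ~ 2 * sigma2^2 (gamma, shape 1/2)
```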

Relevance: 90.00%

Abstract:

This paper presents techniques of likelihood prediction for generalized linear mixed models. Methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show, in simple cases, that likelihood prediction (LP) coincides with already-known best frequentist practice, such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variables generalized linear model, it is shown that in complicated cases LP produces better results than already-known methods.
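
A toy sketch (my own example, not from the paper) of the joint-maximization idea behind likelihood prediction for a model with a random effect:

```python
# A minimal sketch: for a Poisson model with a normal random effect u,
# the likelihood predictor of u maximizes the joint likelihood
# f(y | u) * f(u) rather than integrating u out. Values are made up.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson, norm

y = np.array([3, 5, 4])      # counts from one cluster (illustrative)
beta0, sigma_u = 1.0, 0.8    # fixed effect and random-effect sd, assumed known

def neg_joint_loglik(u):
    mu = np.exp(beta0 + u)
    return -(poisson.logpmf(y, mu).sum() + norm.logpdf(u, 0, sigma_u))

u_hat = minimize_scalar(neg_joint_loglik).x
print("predicted random effect:", round(u_hat, 3))
```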

Relevance: 90.00%

Abstract:

Consumption is an important macroeconomic aggregate, amounting to about 70% of GNP. Finding sub-optimal behavior in consumption decisions casts serious doubt on whether optimizing behavior is applicable on an economy-wide scale, which, in turn, challenges whether it is applicable at all. This paper makes several contributions to the literature on consumption optimality. First, we provide a new result on the basic rule-of-thumb regression, showing that it is observationally equivalent to the one obtained in a well-known optimizing real-business-cycle model. Second, for rule-of-thumb tests based on the Asset-Pricing Equation, we show that the omission of the higher-order term in the log-linear approximation yields inconsistent estimates when lagged observables are used as instruments; however, these are exactly the instruments that have traditionally been used in this literature. Third, we show that nonlinear estimation of a system of N Asset-Pricing Equations can be done efficiently even if the number of asset returns (N) is high vis-a-vis the number of time-series observations (T). We argue that efficiency can be restored by aggregating returns into a single measure that fully captures intertemporal substitution. Indeed, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Pricing Equation, since the latter is a linear function of individual returns. This forms the basis of a new test of rule-of-thumb behavior, which can be viewed as testing for the importance of rule-of-thumb consumers when the optimizing agent holds an equally weighted or a weighted portfolio of traded assets. Using our setup, we find no signs of either rule-of-thumb behavior by U.S. consumers or of habit formation in consumption decisions in econometric tests. Indeed, we show that a simple representative-agent model with CRRA utility is able to explain the time-series data on consumption and aggregate returns. In it, the intertemporal discount factor is significant, ranging from 0.956 to 0.969, while the relative risk-aversion coefficient is precisely estimated, ranging from 0.829 to 1.126. There is no evidence of rejection in over-identifying-restrictions tests.
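
A minimal sketch (simulated series, assumed parameter values) of the Asset-Pricing Equation moment condition with lagged observables as instruments:

```python
# A minimal sketch of the CRRA Euler-equation moments
# E[ beta * (C_{t+1}/C_t)^(-gamma) * R_{t+1} - 1 | z_t ] = 0,
# evaluated on simulated data with lagged observables as instruments.
import numpy as np

rng = np.random.default_rng(3)
T = 500
cons_growth = np.exp(rng.normal(0.005, 0.01, T))   # C_{t+1} / C_t
ret = np.exp(rng.normal(0.01, 0.04, T))            # gross return R_{t+1}

def sample_moments(beta, gamma):
    err = beta * cons_growth[1:] ** (-gamma) * ret[1:] - 1.0
    z = np.column_stack([np.ones(T - 1), cons_growth[:-1], ret[:-1]])
    return z.T @ err / (T - 1)                     # should be ~ 0 at truth

print(sample_moments(beta=0.96, gamma=1.0))
```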

Relevance: 90.00%

Abstract:

The goal of this paper is twofold. First, using five of the most actively traded stocks in the Brazilian financial market, this paper shows that the normality assumption commonly used in the risk management area to describe the distribution of returns standardized by volatilities is not compatible with volatilities estimated by EWMA or GARCH models. In sharp contrast, when the information contained in high-frequency data is used to construct realized volatility measures, we attain normality of the standardized returns, giving promise of improvements in Value at Risk statistics. We also describe the distributions of the volatilities of the Brazilian stocks, showing that they are nearly lognormal. Second, we estimate a simple linear model for the log of realized volatility that differs from the ones in other studies. The main difference is that we do not find evidence of long memory. The estimated model is compared with commonly used alternatives in an out-of-sample experiment.
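
A minimal sketch (simulated intraday returns, not the Brazilian data) of the construction behind the first result: standardizing daily returns by realized volatility:

```python
# A minimal sketch: realized volatility is the square root of the sum
# of squared intraday returns, and daily returns standardized by it
# should look close to standard normal.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
n_days, n_intraday = 1000, 48
sigma_day = np.exp(rng.normal(-0.1, 0.3, n_days))   # ~ lognormal daily vols
intraday = rng.normal(0, sigma_day[:, None] / np.sqrt(n_intraday),
                      (n_days, n_intraday))

daily_ret = intraday.sum(axis=1)
realized_vol = np.sqrt((intraday ** 2).sum(axis=1))

z = daily_ret / realized_vol
print("std of standardized returns:", z.std())      # ~ 1
print("excess kurtosis:", kurtosis(z))              # ~ 0 if normal
```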

Relevance: 90.00%

Abstract:

We consider multistage stochastic linear optimization problems combining joint dynamic probabilistic constraints with hard constraints. We develop a method for projecting decision rules onto hard constraints of wait-and-see type. We establish the relation between the original (infinite-dimensional) problem and approximating problems working with projections from different subclasses of decision policies. Considering the subclass of linear decision rules and a generalized linear model for the underlying stochastic process with noises that are Gaussian or truncated Gaussian, we show that the value and gradient of the objective and constraint functions of the approximating problems can be computed analytically.
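
A small sketch (toy instance of my own) of why linear decision rules pair well with Gaussian noise: linear functionals of the decision are normal, so a probabilistic constraint evaluates in closed form:

```python
# A minimal sketch: if x(xi) = x0 + X @ xi with xi ~ N(0, I), then
# a' x(xi) is normal, so P(a' x(xi) <= b) has a closed form via the
# normal CDF. All numbers are illustrative.
import numpy as np
from scipy.stats import norm

x0 = np.array([1.0, 2.0])            # nominal decision
X = np.array([[0.5, 0.0],            # linear decision rule coefficients
              [0.2, 0.3]])
a, b = np.array([1.0, 1.0]), 4.0     # constraint a' x <= b

mean = a @ x0
std = np.linalg.norm(X.T @ a)        # sd of a' X xi for xi ~ N(0, I)
print("P(a' x(xi) <= b) =", norm.cdf((b - mean) / std))
```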

Relevance: 90.00%

Abstract:

The objective of this study was to compare genetic parameter estimates obtained from single-trait and two-trait Bayesian analyses, under linear and threshold animal models, for categorical morphological traits of Nellore cattle. Data on muscling, physical structure and conformation were recorded between 2000 and 2005 on 3,864 animals from 13 farms participating in the Programa Nelore Brasil. Single-trait and two-trait Bayesian analyses were carried out under threshold and linear models. In general, both the threshold and the linear model were efficient for estimating genetic parameters for visual scores in single-trait Bayesian analyses. In the two-trait analyses, when continuous and categorical data were used, the threshold model yielded genetic correlation estimates of greater magnitude than those of the linear model, and when only categorical data were used, the heritability estimates were similar. The advantage of the linear model was the shorter processing time of the analyses. In the genetic evaluation of animals for visual scores, the choice between the threshold and linear models did not affect the ranking of animals by predicted breeding values, which indicates that both models can be used in breeding programs.
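
A minimal sketch (illustrative values) of the threshold model assumed for categorical scores: a normal latent liability cut by fixed thresholds, so category probabilities are differences of normal CDFs:

```python
# A minimal sketch of the threshold model: an animal's latent
# liability is normal and the observed category is determined by
# fixed cut points. Values below are illustrative.
import numpy as np
from scipy.stats import norm

thresholds = np.array([-0.8, 0.6])   # cut points for 3 visual-score classes
liability = 0.3                      # latent genetic + environmental value

cuts = np.concatenate([[-np.inf], thresholds, [np.inf]])
probs = norm.cdf(cuts[1:] - liability) - norm.cdf(cuts[:-1] - liability)
print("P(score = 1..3):", probs.round(3))   # sums to 1
```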

Relevance: 90.00%

Abstract:

The objectives of this study were to compare the goodness of fit of four non-linear growth models, i.e. Brody, Gompertz, Logistic and Von Bertalanffy, in West African Dwarf (WAD) sheep. A total of 5,274 monthly weight records from birth up to 180 days of age, from 889 lambs, collected from 2001 to 2004 at the Betecoucou breeding farm in Benin, were used. In a preliminary analysis, the General Linear Model procedure of the Statistical Analysis Systems Institute was applied to the dataset to identify the significant effects of sex of lamb (male and female), type of birth (single and twin), season of birth (rainy season and dry season), parity of dam (1, 2 and 3) and year of birth (2001, 2002, 2003 and 2004) on the observed birth weight and monthly weights up to 6 months of age. The model parameters (A, B and k), the coefficient of determination (R²) and the mean square error (MSE) were calculated using the technical computing package Matlab® 2006. The mean values of A, B and k were substituted into each model to calculate the corresponding Akaike's information criterion (AIC). Among the four growth functions, the Brody model was selected for its accuracy of fit, based on the highest R² and the lowest MSE and AIC. Finally, the parameters A, B and k were adjusted in Matlab® 2006 for sex of lamb, year of birth, season of birth, birth type and parity of ewe, providing a specific slope of the Brody growth curve. The results of this study suggest that the Brody model can be useful for WAD sheep breeding under Betecoucou farm conditions through growth monitoring.
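
A minimal sketch (synthetic weights, not the Betecoucou data) of fitting the Brody function W(t) = A(1 - B e^(-kt)) and computing the fit criteria used above:

```python
# A minimal sketch: fit the Brody growth curve by nonlinear least
# squares and compute MSE and one common form of the AIC.
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, B, k):
    return A * (1 - B * np.exp(-k * t))

t = np.arange(0, 181, 30)                              # age in days
w = np.array([2.1, 6.0, 9.1, 11.5, 13.2, 14.6, 15.7])  # weights, kg (made up)

params, _ = curve_fit(brody, t, w, p0=[20.0, 0.9, 0.01])
resid = w - brody(t, *params)
n, p = len(w), 3
mse = (resid ** 2).sum() / n
aic = n * np.log(mse) + 2 * p                          # one common AIC form

print("A, B, k:", params.round(4))
print("MSE:", round(mse, 4), " AIC:", round(aic, 2))
```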

Relevance: 90.00%

Abstract:

This work presents a modelling and identification method for a wheeled mobile robot, including the actuator dynamics. Instead of the classic modelling approach, where the robot position coordinates (x,y) are used as state variables (resulting in a nonlinear model), the proposed discrete model is based on the travelled distance increment Delta_l. Thus, the resulting model is linear and time-invariant, and it can be identified through classical methods such as recursive least squares. This approach has a problem: Delta_l cannot be measured directly. In this paper, this problem is solved using an estimate of Delta_l based on a second-order polynomial approximation. Experimental data were collected and the proposed method was used to identify the model of a real robot.
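
A generic sketch of recursive least squares identification for a linear-in-parameters model; this is a first-order example of my own, not the robot model itself:

```python
# A minimal sketch of recursive least squares: the parameters of
# y_t = phi_t' theta + e_t are updated one sample at a time.
import numpy as np

rng = np.random.default_rng(5)
theta_true = np.array([0.8, 0.5])        # e.g. pole and input gain

theta = np.zeros(2)
P = 1e3 * np.eye(2)                      # large initial covariance
y_prev, u = 0.0, rng.normal(0, 1, 300)
for t in range(300):
    phi = np.array([y_prev, u[t]])       # regressor: past output, input
    y = phi @ theta_true + 0.01 * rng.normal()
    K = P @ phi / (1.0 + phi @ P @ phi)  # gain
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)
    y_prev = y

print("identified parameters:", theta.round(3))  # ~ [0.8, 0.5]
```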

Relevance: 90.00%

Abstract:

Slugging is a well-known phenomenon in multiphase flow which may cause problems such as pipeline vibration and high liquid levels in the separator. It can be classified according to the place of its occurrence. The most severe type, known as slugging in the riser, occurs in the vertical pipe that feeds the platform. Also known as severe slugging, it is capable of causing severe pressure fluctuations in the flow, excessive vibration, flooding of separator tanks, limited production and unscheduled production stops, among other negative aspects that motivated this work. A feasible way to deal with this problem is to design an effective method for the removal or reduction of the oscillations: a controller. According to the literature, a conventional PID controller does not produce good results, due to the high degree of nonlinearity of the process, motivating the development of advanced control techniques. Among these is the model predictive controller (MPC), in which the control action results from the solution of an optimization problem; it is robust and can incorporate physical and/or safety constraints. The objective of this work is to apply a non-conventional nonlinear model predictive control technique to severe slugging, where the amount of liquid mass in the riser is controlled by the production valve and, indirectly, the oscillations of flow and pressure are suppressed, while seeking environmental and economic benefits. The proposed strategy is based on successive linear approximations of the model and the repeated solution of a quadratic optimization problem, providing solutions that improve at each iteration. When this algorithm converges, the predicted values of the process variables are the same as those obtained with the original nonlinear model, ensuring that the constraints are satisfied along the prediction horizon. A mathematical model recently published in the literature, capable of representing the characteristics of severe slugging in a real oil well, is used both for simulation and for the design of the proposed controller, whose performance is compared to that of a linear MPC.
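
A minimal sketch (toy scalar dynamics of my own, not the slugging model) of the successive-linearization idea: linearize the nonlinear model around the current predicted trajectory, solve the resulting quadratic subproblem, and repeat until the linearized and nonlinear predictions agree:

```python
# A minimal sketch of successive-linearization trajectory optimization
# on a toy system x_{k+1} = x_k + dt * (-x_k^2 + u_k).
import numpy as np

dt, N, r = 0.1, 20, 0.05
x0, x_ref = 2.0, 0.5

def f(x, u):                      # toy nonlinear dynamics (assumption)
    return x + dt * (-x * x + u)

u = np.zeros(N)                   # nominal input sequence
for _ in range(15):               # successive linearization iterations
    # roll out the nonlinear model and the local linearizations
    x = np.empty(N + 1); x[0] = x0
    A = np.empty(N)
    for k in range(N):
        A[k] = 1.0 - 2.0 * dt * x[k]     # df/dx along the trajectory
        x[k + 1] = f(x[k], u[k])
    # sensitivity of states x[1..N] to input perturbations du
    S = np.zeros((N, N))
    for k in range(N):
        S[k, k] = dt
        for j in range(k):
            S[k, j] = A[k] * S[k - 1, j]
    # quadratic subproblem: min ||x[1:] + S du - x_ref||^2 + r ||u + du||^2
    M = np.vstack([S, np.sqrt(r) * np.eye(N)])
    rhs = np.concatenate([x_ref - x[1:], -np.sqrt(r) * u])
    du = np.linalg.lstsq(M, rhs, rcond=None)[0]
    u = u + du
    if np.abs(du).max() < 1e-8:   # linear and nonlinear predictions agree
        break

print("final state:", x[-1].round(4), " first input:", u[0].round(4))
```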

Relevance: 90.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)