941 results for Linear coregionalization model
Abstract:
In this paper, we present a Bayesian approach to estimation in the skew-normal calibration model and derive the conditional posterior distributions needed to implement the Gibbs sampler. The proposed methodology thus avoids data transformation. Model fit is assessed with the asymmetric deviance information criterion (ADIC), a modification of the ordinary DIC. We also report an application of the model to a real data set on the relationship between the resistance and the elasticity of a sample of concrete beams.
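The abstract does not reproduce the sampler itself. As a rough illustration of the kind of Gibbs scheme involved, the sketch below implements conjugate conditional updates for a plain (symmetric) normal linear calibration model in Python/NumPy; the paper's skew-normal version additionally requires latent truncated-normal updates. The data, priors, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a "true" calibration line y = 2 + 0.5 x plus noise.
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.8, size=50)

X = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
n, p = X.shape

# Assumed priors: beta ~ N(0, 100 I), sigma^2 ~ Inv-Gamma(a0, b0).
B0_inv = np.eye(p) / 100.0
a0, b0 = 2.0, 2.0

beta, sigma2 = np.zeros(p), 1.0
draws = []

for it in range(5000):
    # Conditional posterior of beta | sigma2, y: multivariate normal.
    V = np.linalg.inv(B0_inv + X.T @ X / sigma2)
    m = V @ (X.T @ y / sigma2)
    beta = rng.multivariate_normal(m, V)

    # Conditional posterior of sigma2 | beta, y: inverse gamma.
    resid = y - X @ beta
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * resid @ resid
    sigma2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)

    if it >= 1000:                                  # discard burn-in
        draws.append(np.r_[beta, sigma2])

draws = np.array(draws)
print("posterior means (alpha, beta, sigma2):", draws.mean(axis=0))
```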
Abstract:
This paper derives the second-order biases of maximum likelihood estimates in a multivariate normal model where the mean vector and the covariance matrix share parameters. We show that the second-order bias can always be obtained by means of ordinary weighted least-squares regressions. Simulation studies indicate that the bias-correction scheme yields nearly unbiased estimators.
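For context, the second-order bias alluded to here is the classical Cox-Snell expression, which Cordeiro and Klein (1994) showed can be written in the matrix form below; that form is what makes a weighted least-squares reformulation possible. The notation is a standard assumption, not taken from the paper itself.

```latex
% Cox–Snell second-order bias of the MLE \hat{\theta}, in the
% Cordeiro–Klein matrix form (K is the Fisher information matrix,
% \kappa_{ij}^{(l)} = \partial \kappa_{ij} / \partial \theta_l):
B(\hat{\theta}) \;=\; K^{-1} A \,\operatorname{vec}\!\left(K^{-1}\right),
\qquad
A = \left[A^{(1)} \,|\, \cdots \,|\, A^{(p)}\right],
\quad
a^{(l)}_{ij} = \kappa_{ij}^{(l)} - \tfrac{1}{2}\,\kappa_{ijl}.
```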
The shoving model for the glass-former LiCl·6H2O: A molecular dynamics simulation study
Abstract:
Molecular dynamics (MD) simulations of LiCl·6H2O showed that the diffusion coefficient D, and also the structural relaxation time
Abstract:
In this paper, we study the factors influencing National Telecom Business Volume using 2008 data published in the China Statistical Yearbook. We illustrate the procedure of regressing National Telecom Business Volume on the following eight variables: GDP, Consumption Levels, Retail Sales of Social Consumer Goods, Total Renovation Investment, Local Telephone Exchange Capacity, Mobile Telephone Exchange Capacity, Mobile Phone End Users, and Local Telephone End Users. Tests for heteroscedasticity and multicollinearity are included in the model evaluation. We also use the AIC and BIC criteria to select independent variables, identify the factors entering the optimal regression model for telecom business volume, and describe the relationship between the independent variables and the dependent variable. Based on the final results, we propose several recommendations for improving telecommunication services and promoting economic development.
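The abstract names the diagnostics but not the computations. A minimal sketch of that workflow on synthetic data, using statsmodels for the Breusch-Pagan heteroscedasticity test, variance inflation factors for multicollinearity, and AIC/BIC comparison, might look as follows; the column names are placeholders, not the Yearbook series.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 120

# Synthetic stand-ins for a few of the explanatory variables.
df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["gdp", "consumption", "mobile_users"])
df["telecom_volume"] = 1.0 + 2.0 * df["gdp"] + 0.5 * df["mobile_users"] + rng.normal(size=n)

def fit(cols):
    X = sm.add_constant(df[cols])
    return sm.OLS(df["telecom_volume"], X).fit()

full = fit(["gdp", "consumption", "mobile_users"])
reduced = fit(["gdp", "mobile_users"])

# Heteroscedasticity: Breusch–Pagan test on the full-model residuals.
lm_stat, lm_pval, _, _ = het_breuschpagan(full.resid, full.model.exog)
print(f"Breusch–Pagan p-value: {lm_pval:.3f}")

# Multicollinearity: variance inflation factors (rule of thumb: VIF > 10 is a concern).
X_full = sm.add_constant(df[["gdp", "consumption", "mobile_users"]])
for i, name in enumerate(X_full.columns):
    if name == "const":
        continue
    print(name, "VIF =", round(variance_inflation_factor(X_full.values, i), 2))

# Variable selection: prefer the specification with the lower AIC/BIC.
print("full    AIC/BIC:", round(full.aic, 1), round(full.bic, 1))
print("reduced AIC/BIC:", round(reduced.aic, 1), round(reduced.bic, 1))
```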
Abstract:
Detecting both the major genes that control the phenotypic mean and those controlling the phenotypic variance is an open issue in quantitative trait loci (QTL) analysis. In order to map both kinds of genes, we applied the idea of classic Haley-Knott regression to double generalized linear models. We performed both kinds of QTL detection for a Red Jungle Fowl x White Leghorn F2 intercross using double generalized linear models. We show that the double generalized linear model is an appropriate and efficient approach for localizing variance-controlling genes. We compared models with and without a fixed sex effect and prefer including the sex effect in order to reduce the residual variance. We also found that different genes may affect body weight at different times as the chicken grows.
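As a rough sketch of the double generalized linear model idea (not the authors' QTL code), one can alternate between a weighted least-squares fit of the mean model and a gamma GLM with log link for the dispersion, fitted to the squared residuals. The simulated genotype, sex effect, and parameter values below are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400

# Simulate a marker genotype that affects both the mean and the residual variance.
geno = rng.integers(0, 2, size=n)           # 0/1 genotype at a putative QTL
sex = rng.integers(0, 2, size=n)            # fixed sex effect
mu = 10.0 + 1.5 * geno + 0.8 * sex          # mean model
log_var = 0.0 + 0.9 * geno                  # variance-controlling effect
y = mu + rng.normal(scale=np.exp(0.5 * log_var), size=n)

X_mean = sm.add_constant(np.column_stack([geno, sex]))
X_disp = sm.add_constant(geno)

weights = np.ones(n)
for _ in range(10):                          # alternate mean and dispersion fits
    mean_fit = sm.WLS(y, X_mean, weights=weights).fit()
    d = (y - mean_fit.fittedvalues) ** 2     # squared residuals as dispersion response
    disp_fit = sm.GLM(d, X_disp, family=sm.families.Gamma(sm.families.links.Log())).fit()
    weights = 1.0 / disp_fit.fittedvalues    # update weights from the fitted variances

print("mean-model coefficients:", mean_fit.params)
print("dispersion-model coefficients (log scale):", disp_fit.params)
```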
Abstract:
This paper presents a two-step pseudo-likelihood estimation technique for generalized linear mixed models with random effects that are correlated between groups. The core idea is to handle the intractable integrals in the likelihood function by a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo-likelihood algorithms in terms of computational time.
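The paper's multivariate Taylor approximation is not reproduced here. As a loosely related single-group illustration, the sketch below uses a second-order (Laplace-type) expansion of the integrand to approximate the intractable random-intercept integral of a binary GLMM and compares it with numerical quadrature; all specifics are assumptions.

```python
import numpy as np
from scipy import optimize, integrate

rng = np.random.default_rng(3)

# One group of a random-intercept logistic GLMM:
# y_j ~ Bernoulli(logit^{-1}(x_j * beta + b)), with b ~ N(0, sigma_b^2).
# The group's likelihood contribution integrates b out.
beta, sigma_b = 0.5, 1.2
x = rng.normal(size=8)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(x * beta))), size=8)

def log_integrand(b):
    eta = x * beta + b
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))       # Bernoulli log-likelihood
    logprior = -0.5 * (b / sigma_b) ** 2 - np.log(sigma_b * np.sqrt(2 * np.pi))
    return loglik + logprior

# Laplace (second-order Taylor) approximation around the mode of the integrand.
opt = optimize.minimize_scalar(lambda b: -log_integrand(b))
b_hat = opt.x
eps = 1e-4                                                  # finite-difference step
hess = (log_integrand(b_hat + eps) - 2 * log_integrand(b_hat)
        + log_integrand(b_hat - eps)) / eps**2
laplace = np.exp(log_integrand(b_hat)) * np.sqrt(2 * np.pi / -hess)

# Reference value by adaptive quadrature.
quad, _ = integrate.quad(lambda b: np.exp(log_integrand(b)), -10, 10)

print(f"Laplace approximation: {laplace:.6f}")
print(f"Quadrature reference : {quad:.6f}")
```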
Abstract:
Background: Genetic variation in environmental sensitivity indicates that animals differ genetically in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike's information criterion based on h-likelihood to select the best-fitting model.
Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters.
Results: Designs with 100 sires, each with at least 100 offspring, are required for the standard deviations of the estimated variances to be lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance exists for both micro- and macro-environmental sensitivities.
Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
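The DHGLM estimation itself was carried out in ASReml and is not reproduced here. The short NumPy sketch below only simulates the kind of half-sib design described, with sire effects on the reaction-norm slope (macro-environmental sensitivity) and on the log residual variance (micro-environmental sensitivity); all parameter values are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

n_sires, n_offspring = 100, 100     # design size recommended in the study

# Sire effects: intercept, reaction-norm slope (macro-environmental sensitivity)
# and effect on the log residual variance (micro-environmental sensitivity).
var_int, var_slope, var_logvar = 1.0, 0.1, 0.05
sire_int = rng.normal(scale=np.sqrt(var_int), size=n_sires)
sire_slope = rng.normal(scale=np.sqrt(var_slope), size=n_sires)
sire_logv = rng.normal(scale=np.sqrt(var_logvar), size=n_sires)

records = []
for s in range(n_sires):
    env = rng.normal(size=n_offspring)                      # known environmental covariate
    mean = 10.0 + sire_int[s] + sire_slope[s] * env         # reaction-norm mean
    resid_sd = np.exp(0.5 * (np.log(2.0) + sire_logv[s]))   # sire-specific residual variance
    y = mean + rng.normal(scale=resid_sd, size=n_offspring)
    records.append(np.column_stack([np.full(n_offspring, s), env, y]))

data = np.vstack(records)                                   # columns: sire, environment, phenotype
print(data.shape)

# Crude check: within-sire phenotypic variances differ across sires, reflecting
# the simulated macro- and micro-environmental sensitivities together.
within_var = [np.var(data[data[:, 0] == s, 2]) for s in range(n_sires)]
print("sd of log within-sire variances:", round(np.std(np.log(within_var)), 3))
```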
Abstract:
We present the hglm package for fitting hierarchical generalized linear models. It can be used for linear mixed models and generalized linear mixed models with random effects for a variety of links and a variety of distributions for both the outcomes and the random effects. Fixed effects can also be fitted in the dispersion part of the model.
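hglm itself is an R package, so its calls are not shown here. As a loose Python analogue of the simplest model class it covers, a Gaussian linear mixed model with a random intercept, a statsmodels sketch could look as follows; the data and names are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_groups, n_per = 30, 20

group = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(scale=0.8, size=n_groups)              # random intercepts
x = rng.normal(size=n_groups * n_per)
y = 1.0 + 0.5 * x + u[group] + rng.normal(scale=1.0, size=n_groups * n_per)

df = pd.DataFrame({"y": y, "x": x, "group": group})

# Linear mixed model with a random intercept per group (the simplest case
# hglm handles; hglm additionally allows non-Gaussian outcomes and random
# effects, and covariates in the dispersion part of the model).
model = smf.mixedlm("y ~ x", df, groups=df["group"]).fit()
print(model.summary())
```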
Abstract:
This paper presents techniques of likelihood prediction for generalized linear mixed models. The methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show that, in simple cases, likelihood prediction (LP) coincides with well-known best frequentist practice such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variables generalized linear model, it is shown that in more complicated cases LP produces better results than existing methods.
Abstract:
This paper contributes to the debate on whether the Brazilian public debt is sustainable in the long run by considering threshold effects on the Brazilian budget deficit. Using data from 1947 to 1999 and a threshold autoregressive model, we find evidence of delays in fiscal stabilization. As suggested in Alesina (1991), delayed stabilizations reflect the existence of political constraints blocking deficit cuts, which are relaxed only when the budget deficit reaches a sufficiently high level, deemed to be unsustainable. In particular, our results suggest that, in the absence of seignorage, fiscal authorities intervene to reduce the deficit only when the increase in the budget deficit reaches 1.74% of GDP. If seignorage is allowed, the threshold increases to 2.2%, suggesting that seignorage makes the government more tolerant of fiscal imbalances.
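The abstract reports the estimated thresholds (1.74% and 2.2% of GDP) but not the estimation mechanics. A minimal sketch of how a single-threshold TAR model can be estimated by grid search over candidate thresholds, on simulated data and with an assumed one-lag specification, is given below.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a TAR(1): the autoregressive dynamics change once the lagged
# series crosses a threshold (true value 1.5 here).
n, true_thresh = 400, 1.5
y = np.zeros(n)
for t in range(1, n):
    if y[t - 1] <= true_thresh:
        y[t] = 0.2 + 0.9 * y[t - 1] + rng.normal(scale=0.3)
    else:
        y[t] = 1.2 + 0.3 * y[t - 1] + rng.normal(scale=0.3)

y_lag, y_cur = y[:-1], y[1:]

def ssr_given_threshold(c):
    """Total SSR from separate OLS fits in the two regimes split at c."""
    total = 0.0
    for mask in (y_lag <= c, y_lag > c):
        X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
        beta, *_ = np.linalg.lstsq(X, y_cur[mask], rcond=None)
        resid = y_cur[mask] - X @ beta
        total += resid @ resid
    return total

# Grid search over the middle 70% of observed lagged values (the trimming
# guarantees enough observations in each regime).
candidates = np.quantile(y_lag, np.linspace(0.15, 0.85, 200))
best = min(candidates, key=ssr_given_threshold)
print(f"estimated threshold: {best:.3f} (true {true_thresh})")
```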
Abstract:
This work discusses the impact of stochastic process parameters on an optimization model applied to mine planning. Based on a real case study, a mathematical model was built to represent the production process associated with the mining, beneficiation, and sale of coal. This model was optimized with linear programming, and the optimal solution was then perturbed by the stochastic behavior of one of the main parameters of the production process. The analysis of the results made it possible to assess the risk associated with the optimal decision, and on this basis a methodology for evaluating operational risk is proposed.
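The production model itself is not given in the abstract. The toy sketch below only illustrates the general procedure described: solve a small product-mix LP with scipy, then re-solve it under Monte Carlo draws of one "stochastic" coefficient to gauge the spread of the optimal value. All numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

# Toy coal product-mix LP: maximize revenue 30*x1 + 20*x2 subject to two
# capacity constraints (all values invented).
c = [-30.0, -20.0]                       # linprog minimizes, so negate revenue
A_ub = [[2.0, 1.0],                      # processing hours used per tonne of each product
        [1.0, 3.0]]                      # washing-plant hours used per tonne of each product
b_ub = [100.0, 90.0]

base = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("deterministic optimum:", -base.fun)

# Treat the processing capacity as stochastic and re-optimize for each draw,
# which gives a distribution of optimal revenues from which to read risk.
values = []
for _ in range(2000):
    b_draw = [rng.normal(100.0, 10.0), 90.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_draw, bounds=[(0, None), (0, None)])
    if res.success:
        values.append(-res.fun)

values = np.array(values)
print("mean optimal revenue:", round(values.mean(), 2))
print("5% worst-case revenue (quantile):", round(np.quantile(values, 0.05), 2))
```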
Abstract:
This paper investigates the pattern of variation of economic growth across and within countries using a time-varying transition matrix Markov-switching approach. The model developed follows the approach of Pritchett (2003) and explains the dynamics of growth based on a collection of different states, each with its own sub-model and growth pattern, among which countries oscillate over time. The transition matrix between states varies over time, depending on each country's conditioning variables, with a linear dynamic for each state. We develop a generalization of Diebold's EM algorithm and estimate an example model on a panel with a transition matrix conditioned on the quality of institutions and the level of investment. We find three growth states: stable growth, miraculous growth, and stagnation. The results show that the quality of institutions is an important determinant of long-term growth, whereas the level of investment has varying roles: it contributes positively in countries with high-quality institutions but is of little relevance in countries with medium- or poor-quality institutions.
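The full time-varying-transition-matrix EM estimator is beyond a short snippet. The sketch below only shows the core ingredient, a Hamilton filter for a two-state Markov-switching mean with a fixed transition matrix, on simulated data; making the transition matrix a function of covariates, as in the paper, would replace the constant matrix P by a logistic function of the conditioning variables.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

# Simulate a 2-state Markov-switching series (say, "stagnation" vs "stable growth").
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])            # P[i, j] = Pr(next state j | current state i)
mu = np.array([0.0, 3.0])
sigma = 1.0

T = 300
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
y = mu[states] + rng.normal(scale=sigma, size=T)

# Hamilton filter: recursively compute Pr(state_t | y_1..y_t) given the parameters.
filt = np.zeros((T, 2))
prob = np.array([0.5, 0.5])             # initial state distribution
for t in range(T):
    pred = prob @ P                      # one-step-ahead state probabilities
    lik = norm.pdf(y[t], loc=mu, scale=sigma)
    joint = pred * lik
    prob = joint / joint.sum()           # condition on the new observation
    filt[t] = prob

accuracy = np.mean((filt[:, 1] > 0.5) == (states == 1))
print(f"share of periods where the filtered state matches the true state: {accuracy:.2f}")
```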
Abstract:
This paper has two original contributions. First, we show that the present value model (PVM hereafter), which has wide application in macroeconomics and finance, entails common cyclical feature restrictions in the dynamics of the vector error-correction representation (Vahid and Engle, 1993); something that had already been investigated in the VECM context by Johansen and Swensen (1999, 2011) but had not been discussed before with this emphasis. We also provide the present value reduced-rank constraints to be tested within the log-linear model. Our second contribution relates to forecasting time series that are subject to those long- and short-run reduced-rank restrictions. The reason why appropriate common cyclical feature restrictions might improve forecasting is that they provide natural exclusion restrictions, preventing the estimation of useless parameters that would otherwise increase the forecast variance with no expected reduction in bias. We applied the techniques discussed in this paper to data known to be subject to present value restrictions, i.e. the online series maintained and updated by Shiller. We focus on three data sets: the first includes the levels of interest rates with long and short maturities, the second includes the level of the real price and dividend for the S&P composite index, and the third includes the logarithmic transformation of prices and dividends. Our exhaustive investigation of several different multivariate models reveals that better forecasts can be achieved when the restrictions are imposed. Moreover, imposing the short-run restrictions produces forecast winners 70% of the time for the target variables of PVMs and 63.33% of the time when all variables in the system are considered.
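The common cyclical feature (reduced-rank short-run) restrictions examined in the paper are not available in off-the-shelf libraries. The sketch below only shows the unrestricted baseline they are compared against: a rank-one VECM fitted and forecast with statsmodels on simulated cointegrated "price" and "dividend" series; all numbers are invented.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(9)
T = 300

# Simulate two cointegrated series: a random-walk "dividend" and a "price"
# that error-corrects toward it (a caricature of a present-value relation).
div = np.cumsum(rng.normal(scale=0.5, size=T)) + 50.0
price = np.zeros(T)
price[0] = div[0]
for t in range(1, T):
    price[t] = price[t - 1] + 0.3 * (div[t - 1] - price[t - 1]) + rng.normal(scale=0.7)

data = np.column_stack([price, div])

# Rank-one VECM with one lagged difference; "ci" places a constant inside
# the cointegration relation.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()

print("cointegrating vector (beta):", res.beta.ravel())
print("5-step-ahead forecasts:\n", res.predict(steps=5))
```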
Abstract:
We study the optimal “inflation tax” in an environment with heterogeneous agents and non-linear income taxes. We first derive the general conditions needed for the optimality of the Friedman rule in this setup. These general conditions are distinct in nature and more easily interpretable than those obtained in the literature with a representative agent and linear taxation. We then study two standard monetary specifications and derive their implications for the optimality of the Friedman rule. For the shopping-time model the Friedman rule is optimal with essentially no restrictions on preferences or transaction technologies. For the cash-credit model the Friedman rule is optimal if preferences are separable between the consumption goods and leisure, or if leisure shifts consumption towards the credit good. We also study a generalized model which nests both models as special cases.
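For readers unfamiliar with the terminology: the Friedman rule referred to here is the prescription of a zero net nominal interest rate, which, combined with a standard Fisher relation and a steady-state real rate pinned down by the discount factor, implies deflation at the rate of time preference. A brief statement in conventional notation (not the paper's) is:

```latex
% Friedman rule: drive the net nominal interest rate to zero,
%   i_t = 0,
% so that, by the Fisher relation 1 + i_t = (1 + r_t)(1 + \pi_t) and a
% steady-state real rate satisfying 1 + r = 1/\beta, the price level
% falls at the rate of time preference:
i_t = 0 \quad\Longrightarrow\quad 1 + \pi = \beta \; < \; 1 .
```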
Abstract:
We develop an affine jump diffusion (AJD) model with the jump-risk premium determined by both idiosyncratic and systematic sources of risk. While we maintain the classical affine setting of the model, we add a finite set of new state variables that affect the paths of the primitive, under both the actual and the risk-neutral measure, by being related to the primitive's jump process. These new variables are assumed to be common to all the primitives. We present simulations to ensure that the model generates the volatility smile and compute the "discounted conditional characteristic function" transform that permits the pricing of a wide range of derivatives.
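The transform-based pricing is not reproduced here. As a small related illustration, the sketch below simulates Euler paths of a basic jump diffusion (constant coefficients, compound Poisson jumps with lognormal sizes, a special case of the affine family) and prices a European call by Monte Carlo under an assumed risk-neutral drift; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

# Invented parameters: risk-free rate, diffusion volatility, jump intensity and
# lognormal jump-size parameters for a Merton-style jump diffusion.
s0, r, sigma = 100.0, 0.02, 0.2
lam, mu_j, sig_j = 0.5, -0.1, 0.15
T, n_steps, n_paths = 1.0, 252, 50_000
dt = T / n_steps

# Risk-neutral drift compensation so that E[S_T] = s0 * exp(r T).
kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0
drift = (r - 0.5 * sigma**2 - lam * kappa) * dt

log_s = np.full(n_paths, np.log(s0))
for _ in range(n_steps):
    dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
    n_jumps = rng.poisson(lam * dt, size=n_paths)
    # Single-jump approximation per step: multiple jumps within one small step
    # are negligible for this dt.
    jump = rng.normal(mu_j, sig_j, size=n_paths) * n_jumps
    log_s += drift + sigma * dw + jump

s_T = np.exp(log_s)
strike = 100.0
call = np.exp(-r * T) * np.maximum(s_T - strike, 0.0).mean()
print(f"Monte Carlo call price: {call:.3f}")
print(f"discounted mean of S_T (should be close to s0): {np.exp(-r * T) * s_T.mean():.2f}")
```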