896 results for Uncertainty bias


Relevance:

20.00%

Publisher:

Abstract:

Random effect models have been widely applied in many fields of research. However, models with uncertain design matrices for random effects have rarely been investigated. In some applications with such problems, an expectation method has been used for simplicity, but this method discards the extra information about the uncertainty in the design matrix. A closed-form solution for this problem is generally difficult to attain. We therefore propose a two-step algorithm for estimating the parameters, especially the variance components in the model. The implementation is based on Monte Carlo approximation and a Newton-Raphson-based EM algorithm. As an example, a simulated genetics dataset was analyzed. The results showed that the proportion of the total variance explained by the random effects was accurately estimated, whereas it was highly underestimated by the expectation method. By introducing heuristic search and optimization methods, the algorithm could be developed further to infer the 'model-based' best design matrix and the corresponding best estimates.
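The contrast between averaging the likelihood over the uncertain design matrix and plugging in its expectation can be sketched in miniature. This is a toy setup, not the paper's method: a grid search over a Monte Carlo marginal likelihood stands in for the Newton-Raphson-based EM algorithm, and the two candidate design matrices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 60, 4

# Toy problem: y = Z u + e where the 0/1 design matrix Z is uncertain
# (here, one of two equally likely candidates). All numbers illustrative.
candidates = [(rng.random((n, q)) < 0.5).astype(float) for _ in range(2)]
true_Z = candidates[0]
u = rng.normal(0, np.sqrt(2.0), q)          # sigma_u^2 = 2
y = true_Z @ u + rng.normal(0, 1.0, n)      # sigma_e^2 = 1

def loglik(Z, su2, se2):
    V = su2 * Z @ Z.T + se2 * np.eye(n)
    return -0.5 * (np.linalg.slogdet(V)[1] + y @ np.linalg.solve(V, y))

def mc_loglik(su2, se2):
    # average the marginal likelihood over the uncertain design matrices
    ls = np.array([loglik(Z, su2, se2) for Z in candidates])
    m = ls.max()
    return m + np.log(np.mean(np.exp(ls - m)))

Zbar = np.mean(candidates, axis=0)          # "expectation method" plug-in

grid = np.linspace(0.2, 4.0, 20)
best_mc = max((mc_loglik(a, b), a, b) for a in grid for b in grid)
best_ex = max((loglik(Zbar, a, b), a, b) for a in grid for b in grid)
print("uncertainty-aware (sigma_u^2, sigma_e^2):", best_mc[1:])
print("plug-in E[Z]      (sigma_u^2, sigma_e^2):", best_ex[1:])
```

The key difference is that `mc_loglik` keeps the mixture over candidate matrices, while the plug-in collapses it to a single averaged matrix before estimation.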


We consider methods for estimating causal effects of treatment when the individuals in the treatment and control groups are self-selected, i.e., the selection mechanism is not randomized. In this case, a simple comparison of treated and control outcomes will not generally yield valid estimates of causal effects. The propensity score method is frequently used for the evaluation of treatment effects; however, it rests on some strong assumptions that are not directly testable. In this paper, we present an alternative modeling approach to drawing causal inference using a shared random-effect model, together with a computational algorithm for likelihood-based inference with such a model. With small numerical studies and a real data analysis, we show that our approach not only gives more efficient estimates but is also less sensitive than existing methods to the model misspecifications we consider.


This paper presents techniques of likelihood prediction for generalized linear mixed models. Methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show, in simple cases, that likelihood prediction (LP) coincides with already known best frequentist practice such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variables generalized linear model, it is shown that in complicated cases LP produces better results than already known methods.


This study presents an approach to combining uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine learning based uncertainty prediction approach is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output from the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to new input data. We used three machine learning models, namely artificial neural networks, model trees, and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
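The committee idea above can be sketched as follows. The three named predictors (ANN, model tree, locally weighted regression) are replaced here by synthetic stand-ins that each emit a 95% quantile of streamflow, and the merge rule (weights inverse to validation pinball loss) is one plausible choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def pinball(y, q_pred, tau):
    """Quantile (pinball) loss for quantile level tau."""
    d = y - q_pred
    return np.mean(np.maximum(tau * d, (tau - 1) * d))

y_val = rng.gamma(2.0, 5.0, 500)          # synthetic "observed" validation flows
q95 = np.quantile(y_val, 0.95)
preds = {                                  # hypothetical stand-ins for the three models
    "ann": q95 + rng.normal(0.0, 2.0, 500),
    "mt":  q95 + rng.normal(1.0, 3.0, 500),
    "lwr": q95 + rng.normal(-1.0, 4.0, 500),
}

# Committee: weight each model inversely to its validation pinball loss.
losses = {name: pinball(y_val, p, 0.95) for name, p in preds.items()}
w = {name: 1.0 / l for name, l in losses.items()}
s = sum(w.values())
merged = sum(w[name] / s * preds[name] for name in preds)
print("individual pinball losses:", {n_: round(l, 3) for n_, l in losses.items()})
print("committee pinball loss:  ", round(pinball(y_val, merged, 0.95), 3))
```

Because the pinball loss is convex in the prediction, the loss of the weighted combination can never exceed that of the worst committee member.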


A procedure for characterizing the global uncertainty of a rainfall-runoff simulation model based on grey numbers is presented. With the grey numbers technique, the uncertainty is characterized by an interval: once the parameters of the rainfall-runoff model have been properly defined as grey numbers, grey mathematics and functions make it possible to obtain simulated discharges in the form of grey numbers, whose envelope defines a band representing the vagueness/uncertainty associated with the simulated variable. The grey numbers representing the model parameters are estimated so that the band obtained from the envelope of simulated grey discharges includes an assigned percentage of observed discharge values while being at the same time as narrow as possible. The approach is applied to a real case study, highlighting that a rigorous application of the procedure for direct simulation through the rainfall-runoff model with grey parameters involves long computational times. However, these times can be significantly reduced by a simplified computing procedure with minimal approximations in the quantification of the grey numbers representing the simulated discharges. Relying on this simplified procedure, the conceptual rainfall-runoff grey model is calibrated, and the uncertainty bands obtained downstream of both the calibration and the validation processes are compared with those obtained by a well-established approach for characterizing uncertainty, such as GLUE. The results of the comparison show that the proposed approach may represent a valid tool for characterizing the global uncertainty associated with the output of a rainfall-runoff simulation model.
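The core mechanics of propagating grey (interval) parameters through a model can be shown with a minimal sketch: a single linear-reservoir step Q = k·S with the storage coefficient k treated as a grey number. The model, parameter values, and forcing are illustrative inventions, not the paper's case study.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grey:
    """A grey number: an interval [lo, hi]."""
    lo: float
    hi: float

def grey_mul(a: Grey, b: Grey) -> Grey:
    p = [a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi]
    return Grey(min(p), max(p))

k = Grey(0.2, 0.4)            # grey storage coefficient [1/day] (illustrative)
storage = Grey(10.0, 10.0)    # crisp initial storage [mm]
band = []
for p_rain in [5.0, 0.0, 2.0]:                     # daily rainfall [mm]
    q = grey_mul(k, storage)                       # grey simulated discharge
    # interval water balance: worst case pairs low storage with high outflow
    storage = Grey(storage.lo + p_rain - q.hi, storage.hi + p_rain - q.lo)
    band.append((q.lo, q.hi))
print(band)
```

The envelope of the `(q.lo, q.hi)` pairs is the uncertainty band; calibration would then shrink the grey parameter interval until the band covers the assigned percentage of observations as narrowly as possible.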


Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed in the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed in the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length. Criteria selecting lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
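A bare-bones version of the lag-selection exercise can be sketched with an unrestricted VAR fitted by equation-wise OLS and ranked by information criteria. No cointegration or SCCF restrictions are imposed here, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1) with an illustrative coefficient matrix.
A = np.array([[0.5, 0.1], [0.2, 0.4]])
T, k = 400, 2
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, k)

def var_ic(y, p):
    """Fit a VAR(p) by OLS and return (AIC, BIC) based on log|Sigma|."""
    T, k = y.shape
    Y = y[p:]
    X = np.hstack([np.ones((T - p, 1))] + [y[p - i:T - i] for i in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U = Y - X @ B
    Sigma = U.T @ U / len(Y)
    n_par = k * (k * p + 1)
    logdet = np.linalg.slogdet(Sigma)[1]
    aic = logdet + 2 * n_par / len(Y)
    bic = logdet + np.log(len(Y)) * n_par / len(Y)
    return aic, bic

ics = {p: var_ic(y, p) for p in range(1, 5)}
chosen = min(ics, key=lambda p: ics[p][1])   # select lag length by BIC
print({p: tuple(round(v, 3) for v in ic) for p, ic in ics.items()}, "-> BIC lag:", chosen)
```

The abstract's point is that once SCCF restrictions are relevant, such standard criteria tend to pick inconsistent, too-short models, motivating criteria that select lag and rank jointly.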


Lucas (1987) has shown the surprising result that the welfare cost of business cycles is quite small. Using standard assumptions on preferences and a fully-fledged econometric model, we computed the welfare costs of macroeconomic uncertainty for the post-WWII era using the multivariate Beveridge-Nelson decomposition for trends and cycles, which considers not only business-cycle uncertainty but also uncertainty from the stochastic trend in consumption. The post-WWII period is relatively quiet, with the welfare costs of uncertainty being about 0.9% of per-capita consumption. Although changing the decomposition method substantially changed the initial results, the welfare cost of uncertainty is qualitatively small in the post-WWII era, about $175.00 a year per capita in the U.S. We also computed the marginal welfare cost of macroeconomic uncertainty using this same technique. It is about twice as large as the welfare cost, at $350.00 a year per capita.
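The Lucas-style calculation behind such numbers can be sketched numerically: find the fraction of certain consumption an agent would give up to be indifferent to the risky stream. The CRRA coefficient and the volatility below are illustrative choices, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

gamma = 2.0      # coefficient of relative risk aversion (illustrative)
sigma = 0.032    # std. dev. of log consumption around trend (illustrative)

def crra(c, g):
    return c ** (1 - g) / (1 - g)

# Mean-one lognormal consumption draws around the deterministic trend.
c = np.exp(rng.normal(-0.5 * sigma**2, sigma, 200_000))
Eu = crra(c, gamma).mean()

# Welfare cost lambda solves u(1 - lambda) = E[u(c)]; solve by bisection.
lo, hi = 0.0, 0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if crra(1 - mid, gamma) > Eu:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(f"welfare cost of uncertainty: {lam:.4%} of consumption")
```

With these parameters the cost is tiny (on the order of gamma·sigma²/2), which is exactly Lucas's point; the abstract's larger figure comes from also counting trend uncertainty.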


Consider the demand for a good whose consumption must be chosen prior to the resolution of uncertainty regarding income. How do changes in the distribution of income affect the demand for this good? In this paper we show that normality is sufficient to guarantee that consumption increases if the Radon-Nikodym derivative of the new distribution with respect to the old is non-decreasing over the whole domain. However, if only first-order stochastic dominance is assumed, more structure must be imposed on preferences to guarantee the validity of the result. Finally, a converse of the first result also obtains: if the change in measure is characterized by a non-decreasing Radon-Nikodym derivative, consumption of such a good will always increase if and only if the good is normal.


Using data from the United States, Japan, Germany, the United Kingdom and France, Sims (1992) found that positive innovations to short-term interest rates led to sharp, persistent increases in the price level. The result was confirmed by other authors and, because of its unexpected nature, was given the name "price puzzle" by Eichenbaum (1992). In this paper I investigate the existence of a price puzzle in Brazil using the same type of estimation and benchmark identification scheme employed by Christiano et al. (2000). In a methodological improvement over these studies, I qualify the results with the construction of bias-corrected bootstrap confidence intervals. Even though the data do show the existence of a statistically significant price puzzle in Brazil, it lasts for only one quarter and is quantitatively immaterial.
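The bias-corrected bootstrap step can be illustrated on the simplest relevant object, an AR(1) coefficient, whose OLS estimate has a well-known downward small-sample bias. This is purely a sketch: the paper works with a full VAR and a structural identification scheme, and all numbers here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def ols_ar1(y):
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

phi_true, T = 0.9, 120
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal()

phi_hat = ols_ar1(y)
resid = y[1:] - phi_hat * y[:-1]

# Residual bootstrap: re-simulate from the fitted model, re-estimate.
boot = np.empty(999)
for b in range(999):
    e = rng.choice(resid, T - 1, replace=True)
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = phi_hat * yb[t - 1] + e[t - 1]
    boot[b] = ols_ar1(yb)

bias = boot.mean() - phi_hat   # estimated small-sample bias (typically negative here)
phi_bc = phi_hat - bias        # bias-corrected point estimate
lo_ci, hi_ci = np.percentile(boot - bias, [2.5, 97.5])
print(f"OLS {phi_hat:.3f}, bias-corrected {phi_bc:.3f}, 95% CI [{lo_ci:.3f}, {hi_ci:.3f}]")
```

In the VAR setting the same shift is applied to the impulse-response draws, which is what lets the author "qualify" the apparent price puzzle.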


In this paper we apply the theory of declsion making with expected utility and non-additive priors to the choice of optimal portfolio. This theory describes the behavior of a rational agent who i5 averse to pure 'uncertainty' (as well as, possibly, to 'risk'). We study the agent's optimal allocation of wealth between a safe and an uncertain asset. We show that there is a range of prices at which the agent neither buys not sells short the uncertain asset. In contrast the standard theory of expected utility predicts that there is exactly one such price. We also provide a definition of an increase in uncertainty aversion and show that it causes the range of prices to increase.
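The no-trade price range can be sketched with a multiple-priors representation, which stands in here for the paper's non-additive prior; the binary asset and the probability interval are hypothetical. A buyer evaluates the asset at its worst-case expected payoff, a short seller at its best case, leaving an interval of prices with no trade.

```python
import numpy as np

# Uncertain asset paying 0 or 1; the agent entertains a set of priors
# over the two states (an illustrative multiple-priors sketch).
payoffs = np.array([0.0, 1.0])
priors = [np.array([p, 1.0 - p]) for p in np.linspace(0.3, 0.6, 7)]

buy_price = min(pr @ payoffs for pr in priors)    # worst case for a buyer
sell_price = max(pr @ payoffs for pr in priors)   # worst case for a short seller
print(f"no trade for prices in [{buy_price:.2f}, {sell_price:.2f}]")
```

Enlarging the set of priors, one natural reading of "an increase in uncertainty aversion", widens this interval; a singleton prior collapses it to the single price of standard expected utility.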


In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
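The mechanics can be sketched in miniature: average many forecasters, estimate their common mean bias on a training window, and subtract it out of sample. The data-generating process and numbers below are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(5)

T, N = 300, 40
target = rng.normal(0, 1, T)
bias_i = rng.normal(0.5, 0.3, N)   # each forecaster's systematic bias (synthetic)
fcst = target[:, None] + bias_i + rng.normal(0, 1, (T, N))

avg = fcst.mean(axis=1)                     # plain average forecast
B_hat = (avg[:150] - target[:150]).mean()   # average bias from training half
bc = avg[150:] - B_hat                      # feasible bias-corrected average

mse_plain = np.mean((avg[150:] - target[150:]) ** 2)
mse_bc = np.mean((bc - target[150:]) ** 2)
print(f"MSE plain average: {mse_plain:.3f}, bias-corrected: {mse_bc:.3f}")
```

Averaging kills the idiosyncratic forecast noise; the bias correction removes the common component that averaging alone cannot, which is the sense in which the feasible estimator approaches the conditional expectation.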


This article explains why the existence of state-owned financial institutions makes it more difficult for a country to balance its budget. We show that states can use their financial institutions to transfer their deficits to the federal government. As a result, there is a bias towards large deficits and high inflation rates. Our model also predicts that state-owned financial institutions should underperform the market, mainly because they concentrate their portfolios in non-performing loans to their own shareholders, that is, the states. Brazil and Argentina are two countries with a history of high inflation that confirm our predictions.


With standard assumptions on preferences and a fully-fledged econometric model, we computed the welfare costs of macroeconomic uncertainty for the post-war U.S. using the Beveridge-Nelson decomposition. Welfare costs are about 0.9% of per-capita consumption ($175.00), and marginal welfare costs are about twice as large.


This article applies a theorem on Nash equilibrium under uncertainty (Dow & Werlang, 1994) to the classic Cournot model of oligopolistic competition. It shows, in particular, how one can map all Cournot equilibria (which include the monopoly and the null solutions) using only the uncertainty aversion coefficients of the producers. The effects of variations in these parameters on the equilibrium quantities are studied, also assuming exogenous increases in the number of competing firms in the game. The Cournot solutions under uncertainty are compared with the monopolistic one. It is shown, principally, that there is an uncertainty aversion level in the industry such that every aversion coefficient beyond it induces firms to produce an aggregate output smaller than the monopoly output. At the end of the article, the equilibrium solutions are specialized to linear demand and to Cournot duopoly. Equilibrium analysis in the symmetric case allows one to identify the uncertainty aversion coefficient for the whole industry as a proportional lack-of-information cost, which would be conveyed by the market price in the perfect competition case (Lerner Index).
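The threshold result can be illustrated with a stylized reduced form, not the article's exact model: each firm best-responds to a pessimistic (worst-case) view of its rivals' output, inflated by an aversion coefficient c, under linear inverse demand P = a - bQ with zero cost. All functional forms and numbers here are hypothetical.

```python
def symmetric_output(n, c, a=1.0, b=1.0):
    """Symmetric per-firm output when each firm reacts to rivals' output
    scaled up by (1 + c): fixed point of q = (a - b*(n-1)*(1+c)*q) / (2b)."""
    return a / (b * (2 + (n - 1) * (1 + c)))

a, b, n = 1.0, 1.0, 3
q_monopoly = a / (2 * b)                    # monopoly output under P = a - bQ
for c in [0.0, 0.5, 1.0, 1.5]:
    Q = n * symmetric_output(n, c, a, b)
    print(f"c={c:.1f}: aggregate output {Q:.3f} vs monopoly {q_monopoly:.3f}")
```

At c = 0 this reduces to standard Cournot (aggregate output 0.75 for n = 3); in this stylized form the aggregate output crosses the monopoly level at c = 1, mirroring the article's claim that beyond some industry-wide aversion level firms jointly produce less than a monopolist.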