806 results for stock uncertainty
Abstract:
This paper presents techniques of likelihood prediction for generalized linear mixed models. The methods are explained through a series of examples, from a classical one to more complicated ones. The examples show that, in simple cases, likelihood prediction (LP) coincides with established frequentist best practice such as the best linear unbiased predictor. The paper also outlines a way to handle covariate uncertainty when producing predictive inference. Using a Poisson errors-in-variables generalized linear model, it is shown that in more complicated cases LP produces better results than existing methods.
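For context, one standard formalization of likelihood prediction (the notation here is ours, not the paper's) treats the unobserved future value y_f as an additional argument of the likelihood and profiles out the parameters:

    L_p(y_f \mid y) \propto \sup_{\theta} f(y, y_f;\, \theta) = f\bigl(y, y_f;\, \hat{\theta}(y, y_f)\bigr),

with the point prediction taken as the maximizer of L_p. In a linear mixed model with known variance components, maximizing the joint density of the data and the random effects in this way reproduces Henderson's mixed-model equations, whose solution is the best linear unbiased predictor, consistent with the coincidence the abstract describes.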
Abstract:
Stock market wealth effects on the level of consumption in the United States economy have long been debated; there is evidence both for and against their prominence and their symmetry. This paper investigates the strength of the negative effect by building models to analyze unexpected shocks to the Standard and Poor's 500 index. First, a transmission mechanism between the stock market and GDP is established using second-order vector autoregressive models. Then, theory from the life cycle model and adaptations of previous researchers' models are used to build a structural model. The paper finds that stock market wealth effects are small but important to consider, especially if markets are overpriced; this claim is supported by simulations of 'alternative scenarios' and by the historical experiences of 1987 and 2001.
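As a rough guide to the structural side (the functional form and symbols below are our own illustration, not the paper's exact specification), life-cycle theory motivates an aggregate consumption function in which stock market wealth enters separately from income,

    C_t = \alpha + \beta\, Y_t + \gamma\, W_t + \varepsilon_t,

where Y_t is disposable income, W_t is household stock market wealth (with the S&P 500 driving the shocks studied here), and \gamma is the marginal propensity to consume out of stock wealth; estimates of \gamma in this literature are typically only a few cents per dollar, which is why the effect is small yet worth tracking when prices look inflated.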
Abstract:
This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine learning based uncertainty prediction approach is useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output of the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to new input data. We used three machine learning models, namely artificial neural networks, model trees, and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method. Journal of Hydroinformatics, in press, 2013.
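A minimal sketch of the committee idea, under our own simplifying assumptions: the Monte Carlo quantiles are assumed to have been computed offline and serve as regression targets, the model tree and locally weighted regression are approximated with scikit-learn's decision tree and distance-weighted k-NN, and the committee weights are simple inverse validation errors rather than the paper's dynamic scheme.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: inputs are antecedent precipitation and streamflow,
    # target is one quantile (e.g. the 90th percentile) of the Monte Carlo
    # streamflow ensemble at each time step.
    X = rng.random((500, 4))                      # [P_t, P_{t-1}, Q_{t-1}, Q_{t-2}]
    y = X @ np.array([2.0, 1.0, 3.0, 0.5]) + 0.3 * rng.standard_normal(500)

    X_train, X_val = X[:350], X[350:]
    y_train, y_val = y[:350], y[350:]

    # Three uncertainty models (ANN, "model tree", "locally weighted regression").
    models = [
        MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        DecisionTreeRegressor(max_depth=6, random_state=0),
        KNeighborsRegressor(n_neighbors=15, weights="distance"),
    ]
    for m in models:
        m.fit(X_train, y_train)

    # Committee: weight each model by the inverse of its validation RMSE.
    preds_val = np.array([m.predict(X_val) for m in models])
    rmse = np.sqrt(((preds_val - y_val) ** 2).mean(axis=1))
    w = (1.0 / rmse) / (1.0 / rmse).sum()

    merged = w @ preds_val                        # combined quantile prediction
    print("per-model RMSE:", rmse.round(3))
    print("committee RMSE:", np.sqrt(((merged - y_val) ** 2).mean()).round(3))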
Abstract:
A procedure for characterizing the global uncertainty of a rainfall-runoff simulation model using grey numbers is presented. With the grey numbers technique, uncertainty is characterized by an interval: once the parameters of the rainfall-runoff model have been defined as grey numbers, grey mathematics and grey functions make it possible to obtain simulated discharges in the form of grey numbers, whose envelope defines a band representing the vagueness/uncertainty associated with the simulated variable. The grey numbers representing the model parameters are estimated so that the band obtained from the envelope of simulated grey discharges includes an assigned percentage of observed discharge values while being as narrow as possible. The approach is applied to a real case study, highlighting that a rigorous application of the procedure for direct simulation through the rainfall-runoff model with grey parameters involves long computational times. These times can, however, be significantly reduced by a simplified computing procedure with minimal approximations in the quantification of the grey numbers representing the simulated discharges. Relying on this simplified procedure, the conceptual rainfall-runoff grey model is calibrated, and the uncertainty bands obtained downstream of both the calibration and the validation processes are compared with those obtained using a well-established approach for characterizing uncertainty, the GLUE approach. The comparison shows that the proposed approach can be a valid tool for characterizing the global uncertainty associated with the output of a rainfall-runoff simulation model.
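A minimal sketch of the interval idea behind grey simulation, using a toy linear-reservoir model of our own rather than the paper's conceptual model: each grey parameter is an interval, and interval arithmetic (valid here because the update is monotone in both the state and the parameter) propagates the band through time. Note this is the conservative envelope, letting the grey parameter vary step by step.

    # Toy grey-number propagation through a linear reservoir: S' = S + P - Q, Q = k * S.
    # The recession parameter k is a grey number [k_lo, k_hi]; states and discharges
    # become intervals whose envelope is the uncertainty band.

    precip = [5.0, 12.0, 0.0, 3.0, 0.0, 0.0, 8.0]   # rainfall series (mm)
    k_lo, k_hi = 0.15, 0.35                          # grey recession coefficient
    S_lo = S_hi = 20.0                               # initial storage (crisp)

    band = []
    for P in precip:
        # Monotone interval arithmetic: discharge grows with both k and S, while
        # next storage S*(1-k)+P grows with S and shrinks with k (0 < k < 1, S >= 0).
        Q_lo, Q_hi = k_lo * S_lo, k_hi * S_hi
        S_lo, S_hi = S_lo * (1.0 - k_hi) + P, S_hi * (1.0 - k_lo) + P
        band.append((round(Q_lo, 2), round(Q_hi, 2)))

    print(band)   # grey discharge band; calibration would tune [k_lo, k_hi] so the
                  # band covers a target share of observations while staying narrow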
Abstract:
Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains, in forecasting accuracy and estimation uncertainty, of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed in the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed in the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length; criteria that select lag and rank simultaneously perform better in this case. Second, this translates into superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even for the long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
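A minimal sketch of the model-selection step, under our assumptions: synthetic data stand in for the simulated series, statsmodels is used for lag selection by information criteria and for the Johansen-based cointegration rank test, and the SCCF (common-feature) restrictions themselves are not imposed because statsmodels provides no off-the-shelf routine for them.

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM, select_order, select_coint_rank

    rng = np.random.default_rng(1)

    # Two I(1) series sharing one common stochastic trend (cointegration rank 1).
    n = 400
    trend = np.cumsum(rng.standard_normal(n))
    y1 = trend + 0.5 * rng.standard_normal(n)
    y2 = 0.8 * trend + 0.5 * rng.standard_normal(n)
    data = np.column_stack([y1, y2])

    # Lag length chosen by standard information criteria (AIC/BIC/HQ).
    order = select_order(data, maxlags=8, deterministic="ci")
    print("AIC lag:", order.aic, " BIC lag:", order.bic, " HQ lag:", order.hqic)

    # Cointegration rank by the Johansen trace test, then fit the restricted VECM.
    rank = select_coint_rank(data, det_order=0, k_ar_diff=order.aic, method="trace")
    res = VECM(data, k_ar_diff=order.aic, coint_rank=rank.rank, deterministic="ci").fit()
    print("chosen rank:", rank.rank)
    print("cointegrating vector (beta):", res.beta.ravel().round(3))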
Abstract:
Lucas (1987) showed the surprising result that the welfare cost of business cycles is quite small. Using standard assumptions on preferences and a fully-fledged econometric model, we compute the welfare costs of macroeconomic uncertainty for the post-WWII era using the multivariate Beveridge-Nelson decomposition for trends and cycles, which considers not only business-cycle uncertainty but also uncertainty from the stochastic trend in consumption. The post-WWII period is relatively quiet, with the welfare cost of uncertainty being about 0.9% of per-capita consumption. Although changing the decomposition method substantially changes the initial results, the welfare cost of uncertainty remains qualitatively small in the post-WWII era, about $175.00 per capita per year in the U.S. We also compute the marginal welfare cost of macroeconomic uncertainty using the same technique; it is about twice as large as the welfare cost, $350.00 per capita per year.
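For reference, the Lucas-style compensating-variation calculation behind such welfare-cost figures can be sketched as follows (our notation; the paper's multivariate Beveridge-Nelson implementation adds stochastic-trend uncertainty to the cyclical variance):

    E\bigl[u\bigl((1+\lambda)\, c_t\bigr)\bigr] = u\bigl(E[c_t]\bigr), \qquad u(c) = \frac{c^{1-\phi}}{1-\phi} \;\Rightarrow\; \lambda \approx \tfrac{1}{2}\, \phi\, \sigma_c^2,

where \sigma_c^2 is the variance of log consumption around its trend and \phi is the coefficient of relative risk aversion, so the welfare cost \lambda is the share of per-capita consumption the representative agent would give up to remove the uncertainty.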
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, over the period 1976-1992. We also test a conditional APT model that uses the difference between the 3-day rate (CDB) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.
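One common way to cast such a conditional two-factor test in GMM form (a generic illustration in our notation, not necessarily the paper's exact specification) is through a linear stochastic discount factor and lagged instruments:

    m_{t+1} = 1 - b_1\, r^{m}_{t+1} - b_2\,\bigl(r^{cdb}_{t+1} - r^{on}_{t+1}\bigr), \qquad E\bigl[(m_{t+1}\, R_{i,t+1} - 1) \otimes z_t\bigr] = 0,

where R_{i,t+1} are gross returns on the size portfolios, r^m is the market return, r^{cdb} - r^{on} is the CDB-overnight spread capturing inflation risk, z_t collects the conditioning instruments, and GMM chooses b_1, b_2 to make the sample moments as small as possible; the conditional CAPM is the restriction b_2 = 0.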
Abstract:
Consider the demand for a good whose consumption must be chosen prior to the resolution of uncertainty regarding income. How do changes in the distribution of income affect the demand for this good? In this paper we show that normality is sufficient to guarantee that consumption increases whenever the Radon-Nikodym derivative of the new distribution with respect to the old is non-decreasing over the whole domain. However, if only first-order stochastic dominance is assumed, more structure must be imposed on preferences to guarantee the validity of the result. Finally, a converse of the first result also obtains: consumption of such a good always increases under changes of measure with non-decreasing Radon-Nikodym derivatives if and only if the good is normal.
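In symbols (our notation), the first result reads:

    \frac{dG}{dF}(w) \text{ non-decreasing in } w \;\text{ and the good normal} \;\Longrightarrow\; x^{*}(G) \ge x^{*}(F), \qquad x^{*}(H) = \arg\max_{x}\, E_{H}\bigl[v(x,\, w - p x)\bigr],

where w is income with old distribution F or new distribution G and x is chosen before w is realized; a non-decreasing Radon-Nikodym derivative is exactly a monotone likelihood ratio shift of the income distribution.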
Abstract:
In this paper we apply the theory of decision making with expected utility and non-additive priors to the choice of an optimal portfolio. This theory describes the behavior of a rational agent who is averse to pure 'uncertainty' (as well as, possibly, to 'risk'). We study the agent's optimal allocation of wealth between a safe and an uncertain asset. We show that there is a range of prices at which the agent neither buys nor sells short the uncertain asset. In contrast, the standard theory of expected utility predicts that there is exactly one such price. We also provide a definition of an increase in uncertainty aversion and show that it causes the range of prices to widen.
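The portfolio-inertia result can be stated compactly in the Choquet-expected-utility notation the paper builds on (the statement below is our paraphrase): if beliefs are a convex capacity \nu, the uncertain asset pays X, and the agent starts from a riskless position, then no position is taken whenever the price p satisfies

    \int X \, d\nu \;\le\; p \;\le\; -\int (-X)\, d\nu,

with Choquet integrals on both sides. Under an additive prior the two bounds coincide, recovering the single break-even price of standard expected utility, and an increase in uncertainty aversion widens the interval.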
Abstract:
Using standard assumptions on preferences and a fully-fledged econometric model, we compute the welfare costs of macroeconomic uncertainty for the post-war U.S. using the Beveridge-Nelson decomposition. Welfare costs are about 0.9% of per-capita consumption ($175.00), and marginal welfare costs are about twice as large.
Abstract:
This article applies a theorem on Nash equilibrium under uncertainty (Dow & Werlang, 1994) to the classic Cournot model of oligopolistic competition. It shows, in particular, how one can map all Cournot equilibria (which include the monopoly and the null solutions) using only a function of the producers' uncertainty aversion coefficients. The effect of variations in these parameters on the equilibrium quantities is studied, also allowing exogenous increases in the number of firms in the game. The Cournot solutions under uncertainty are compared with the monopolistic one. It is shown, principally, that there is an uncertainty aversion level in the industry such that every aversion coefficient beyond it induces firms to produce an aggregate output smaller than the monopoly output. At the end of the article, the equilibrium solutions are specialized to linear demand and to Cournot duopoly. Equilibrium analysis in the symmetric case allows the uncertainty aversion coefficient of the whole industry to be identified as a proportional lack-of-information cost which would be conveyed by the market price in the perfect competition case (Lerner index).
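As the certainty benchmark for the comparisons the article makes (standard textbook formulas, not the article's uncertainty-adjusted solutions): with linear inverse demand P(Q) = a - bQ and constant marginal cost c,

    q_i^{C} = \frac{a-c}{(n+1)\, b}, \qquad Q^{C} = \frac{n\,(a-c)}{(n+1)\, b}, \qquad Q^{m} = \frac{a-c}{2b},

so the symmetric n-firm Cournot industry output Q^C always exceeds the monopoly output Q^m under certainty; the article's claim is that sufficiently strong uncertainty aversion drives equilibrium aggregate output below Q^m, with the Lerner index (P-c)/P providing the markup scale for the lack-of-information interpretation.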
Abstract:
We define Nash equilibrium for two-person normal form games in the presence of uncertainty, in the sense of Knight (1921). We use the formalization of uncertainty due to Schmeidler and Gilboa. We show that there exist Nash equilibria for any degree of uncertainty, as measured by the uncertainty aversion (Dow and Werlang (1992a)). We show by example that prudent behaviour (maxmin) can be obtained as an outcome even when it is not rationalizable in the usual sense. Next, we break down backward induction in the twice-repeated prisoner's dilemma. We link these results with those on cooperation in the finitely repeated prisoner's dilemma obtained by Kreps-Milgrom-Roberts-Wilson (1982), and with the literature on epistemological conditions underlying Nash equilibrium. The knowledge notion implicit in this model of equilibrium does not display logical omniscience.
Abstract:
We present two alternative definitions of Nash equilibrium for two-person games in the presence of uncertainty, in the sense of Knight. We use the formalization of uncertainty due to Schmeidler and Gilboa. We show that, with one of the definitions, prudent behaviour (maxmin) can be obtained as an outcome even when it is not rationalizable in the usual sense. Most striking is that, with the same definition, we break down backward induction in the twice-repeated prisoner's dilemma. We also link these results with the Kreps-Milgrom-Roberts-Wilson explanation of cooperation in the finitely repeated prisoner's dilemma.
Abstract:
This paper discusses the prospects for economic regulation in Brazil. To that end, it first presents the historical evolution of regulation in the country, discussing the main issues related to the federal regulatory agencies. Second, the regulatory frameworks of five different sectors (telecommunications, electricity, basic sanitation, oil, and natural gas) are analyzed. Third, the issue of financing infrastructure investments is addressed, emphasizing the role of public-private partnerships (PPPs). A final section contains a possible agenda for regulation in Brazil.