11 results for TRUE COSOLVENCY
in Scottish Institute for Research in Economics (SIRE), United Kingdom
Abstract:
We consider optimal monetary and fiscal policies in a New Keynesian model of a small open economy with sticky prices and wages. In this benchmark setting monetary policy is all we need - analytical results demonstrate that variations in government spending should play no role in the stabilization of shocks. In extensions we show, firstly, that this is true even when allowing for inflation inertia through backward-looking rule-of-thumb price and wage-setting, as long as there is no discrepancy between the private and social evaluation of the marginal rate of substitution between consumption and leisure. Secondly, the optimal neutrality of government spending is robust to the issuance of public debt. In the presence of debt, government spending will deviate from the optimal steady-state, but only to the extent required to cover the deficit, not to provide any additional macroeconomic stabilization. However, unlike government spending, variations in tax rates can play a complementary role to monetary policy, as they change relative prices rather than demand.
Abstract:
Pricing American options is an interesting research topic since there is no analytical solution to value these derivatives. Different numerical methods have been proposed in the literature, with some, if not all, either limited to a specific payoff or not applicable to multidimensional cases. The application of Monte Carlo methods to price American options is a relatively new area that started with Longstaff and Schwartz (2001). Since then, a few variations of that methodology have been proposed. The general conclusion is that Monte Carlo estimators tend to underestimate the true option price. The present paper follows Glasserman and Yu (2004b) and proposes a novel Monte Carlo approach, based on designing "optimal martingales" to determine stopping times. We show that our martingale approach can also be used to compute the dual as described in Rogers (2002).
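For illustration, a minimal sketch of the least-squares Monte Carlo approach of Longstaff and Schwartz (2001) on which this literature builds, pricing a Bermudan put under geometric Brownian motion. All parameter values and the polynomial basis are illustrative assumptions; the paper's "optimal martingale" construction and the Rogers (2002) dual are not reproduced here.

```python
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=50_000, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) price of a Bermudan put.

    Illustrative parameters only; the underlying follows GBM.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # Simulate GBM paths: shape (n_paths, n_steps + 1).
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

    # Start from the terminal payoff and step backwards.
    cashflow = np.maximum(K - S[:, -1], 0.0)
    for t in range(n_steps - 1, 0, -1):
        cashflow *= disc
        itm = K - S[:, t] > 0.0            # regress only on in-the-money paths
        if itm.sum() < 3:
            continue
        x = S[itm, t]
        # Quadratic polynomial basis for the continuation value.
        A = np.column_stack([np.ones_like(x), x, x**2])
        coef, *_ = np.linalg.lstsq(A, cashflow[itm], rcond=None)
        continuation = A @ coef
        exercise = K - x
        stop = exercise > continuation     # exercise beats estimated continuation
        idx = np.where(itm)[0][stop]
        cashflow[idx] = exercise[stop]
    return disc * cashflow.mean()

print(f"LSM price: {lsm_american_put():.4f}")
```

Because the estimated exercise policy is necessarily suboptimal, estimators of this kind tend to be biased low, which is the underestimation the abstract refers to; dual approaches in the spirit of Rogers (2002) instead deliver upper bounds.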
Abstract:
We study a psychologically based foundation for choice errors. The decision maker applies a preference ranking after forming a 'consideration set' prior to choosing an alternative. Membership of the consideration set is determined both by alternative-specific salience and by the rationality of the agent (his general propensity to consider all alternatives). The model turns out to include a logit formulation as a special case. In general, it has a rich set of implications both for exogenous parameters and for a situation in which alternatives can affect their own salience (salience games). Such implications are relevant to assess the link between 'revealed' preferences and 'true' preferences: for example, less rational agents may paradoxically express their preference through choice more truthfully than more rational agents.
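A minimal sketch of a random-consideration choice rule in the spirit of the abstract: each alternative enters the consideration set independently, with a probability increasing in its salience and in the agent's rationality, and the preference-best considered alternative is chosen. The functional form q_i = 1 - (1 - r)^{s_i} is an illustrative assumption, not the paper's specification.

```python
import numpy as np

def choice_probabilities(salience, rationality, preference_order):
    """Choice probabilities in a simple random-consideration model.

    Each alternative i enters the consideration set independently with
    probability q_i; the agent then picks the considered alternative that
    ranks highest in `preference_order`.  The link q_i = 1 - (1 - r)**s_i,
    tying salience s_i and rationality r together, is an illustrative
    assumption.
    """
    salience = np.asarray(salience, dtype=float)
    q = 1.0 - (1.0 - rationality) ** salience
    probs = np.zeros(len(q))
    p_none_better = 1.0
    for i in preference_order:            # best-ranked alternative first
        probs[i] = q[i] * p_none_better   # chosen iff considered and nothing
        p_none_better *= 1.0 - q[i]       # preferred to it was considered
    return probs, p_none_better           # residual mass: no alternative chosen

probs, p_default = choice_probabilities(
    salience=[2.0, 1.0, 0.5], rationality=0.6, preference_order=[0, 1, 2])
print(probs, p_default)
```

As rationality approaches one, every alternative is considered and choice reveals the preference ranking exactly; at intermediate rationality, relative salience drives a wedge between revealed and true preferences.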
Abstract:
Robust decision making implies welfare costs or robustness premia when the approximating model is the true data generating process. To examine the importance of these premia at the aggregate level we employ a simple two-sector dynamic general equilibrium model with human capital and introduce an additional form of precautionary behavior. The latter arises from the robust decision maker's ability to reduce the effects of model misspecification through allocating time and existing human capital to this end. We find that the extent of the robustness premia critically depends on the productivity of time relative to that of human capital. When the relative efficiency of time is low, despite transitory welfare costs, there are gains from following robust policies in the long-run. In contrast, high relative productivity of time implies misallocation costs that remain even in the long-run. Finally, depending on the technology used to reduce model uncertainty, we find that while increasing the fear of model misspecification leads to a net increase in precautionary behavior, investment and output can fall.
Abstract:
Spatial econometrics has been criticized by some economists because some model specifications have been driven by data-analytic considerations rather than having a firm foundation in economic theory. In particular, this applies to the so-called W matrix, which is integral to the structure of endogenous and exogenous spatial lags and to spatial error processes, and which is almost the sine qua non of spatial econometrics. Moreover, it has been suggested that the significance of a spatially lagged dependent variable involving W may be misleading, since it may simply be picking up the effects of omitted spatially dependent variables, incorrectly suggesting the existence of a spillover mechanism. In this paper we review the theoretical and empirical rationale for network dependence and spatial externalities as embodied in spatially lagged variables, arguing that failing to acknowledge their presence at least leads to biased inference, can be a cause of inconsistent estimation, and leads to an incorrect understanding of true causal processes.
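To make the role of W concrete, here is a minimal sketch (not from the paper) of a spatial-lag data-generating process and the endogeneity problem that motivates the inference issues discussed above; the line-neighbour contiguity matrix is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, beta = 200, 0.5, 1.0

# Row-normalized contiguity matrix W for units on a line: each unit's
# neighbours are its immediate left and right neighbours (illustrative).
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

# Spatial-lag DGP: y = rho*W*y + x*beta + eps, so y = (I - rho*W)^(-1)(x*beta + eps).
x = rng.standard_normal(n)
eps = rng.standard_normal(n)
y = np.linalg.solve(np.eye(n) - rho * W, x * beta + eps)

# Naive OLS of y on [1, Wy, x] is inconsistent: Wy is endogenous, being
# correlated with eps through the simultaneous spatial feedback.
A = np.column_stack([np.ones(n), W @ y, x])
ols = np.linalg.lstsq(A, y, rcond=None)[0]
print("OLS estimates (const, rho, beta):", ols.round(3))  # rho typically biased
```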
Abstract:
‘Modern’ Phillips curve theories predict inflation is an integrated, or near-integrated, process. However, inflation appears bounded above and below in developed economies and so cannot be ‘truly’ integrated; it is more likely stationary around a shifting mean. If agents believe inflation is integrated, as in the ‘modern’ theories, then they are making systematic errors concerning the statistical process of inflation. An alternative theory of the Phillips curve is developed that is consistent with the ‘true’ statistical process of inflation. It is demonstrated that United States inflation data are consistent with the alternative theory but not with the existing ‘modern’ theories.
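A small simulation (illustrative parameters, not the paper's test) of the point at issue: a stationary AR(1) around a shifting mean can mimic an integrated process in standard unit-root diagnostics.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400

# (a) Truly integrated inflation: a random walk.
random_walk = np.cumsum(rng.normal(0, 0.5, T))

# (b) Stationary AR(1) around a mean that shifts once, at t=200 -- the
#     'bounded' alternative.  Break date and sizes are illustrative.
mean = np.where(np.arange(T) < 200, 2.0, 6.0)
ar = np.zeros(T)
for t in range(1, T):
    ar[t] = mean[t] + 0.7 * (ar[t - 1] - mean[t - 1]) + rng.normal(0, 0.5)

def df_coefficient(y):
    """Slope b from the Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t;
    values of b near zero mimic a unit root."""
    dy, ylag = np.diff(y), y[:-1]
    A = np.column_stack([np.ones_like(ylag), ylag])
    return np.linalg.lstsq(A, dy, rcond=None)[0][1]

print("DF slope, random walk:      ", round(df_coefficient(random_walk), 4))
print("DF slope, shifting-mean AR: ", round(df_coefficient(ar), 4))
# Ignoring the mean shift, the stationary series also looks near-integrated.
```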
Abstract:
This paper seeks to identify whether there is a representative empirical Okun’s Law coefficient (OLC) and to measure its size. We carry out a meta-regression analysis (MRA) on a sample of 269 estimates of the OLC to uncover reasons for differences in empirical results and to estimate the ‘true’ OLC. On statistical (and other) grounds, we find it appropriate to investigate two separate sub-samples, using respectively (some measure of) unemployment or output as dependent variable. Our results can be summarized as follows. First, there is evidence of type II publication bias in both sub-samples, but a type I bias is present only among the papers using some measure of unemployment as the dependent variable. Second, after correction for publication bias, authentic and statistically significant OLC effects are present in both sub-samples. Third, bias-corrected estimated true OLCs are significantly lower (in absolute value) in models using some measure of unemployment as the dependent variable. Using a bivariate MRA approach, the estimated true effects are -0.25 for the unemployment sub-sample and -0.61 for the output sub-sample; with a multivariate MRA methodology, the estimated true effects are -0.40 and -1.02 for the unemployment and output sub-samples respectively.
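For readers unfamiliar with MRA, a minimal sketch of the standard bivariate FAT-PET regression underlying this kind of publication-bias correction, run on synthetic data; this is the generic specification from the MRA literature, not necessarily the authors' exact model.

```python
import numpy as np

def fat_pet(estimates, std_errors):
    """Bivariate FAT-PET meta-regression: estimate_i = b0 + b1*SE_i + u_i,
    estimated by WLS with weights 1/SE^2.  b1 captures publication bias
    (funnel asymmetry test); b0 is the bias-corrected 'true' effect
    (precision-effect test)."""
    est, se = np.asarray(estimates, float), np.asarray(std_errors, float)
    w = np.sqrt(1.0 / se**2)
    A = np.column_stack([np.ones_like(se), se])
    b0, b1 = np.linalg.lstsq(A * w[:, None], est * w, rcond=None)[0]
    return b0, b1

# Synthetic illustration (not the paper's data): a true effect of -0.4 plus
# a publication-bias term proportional to the standard error.
rng = np.random.default_rng(2)
se = rng.uniform(0.02, 0.30, 269)
est = -0.4 - 1.5 * se + rng.normal(0, se)
print("PET (true effect), FAT (bias):", [round(v, 3) for v in fat_pet(est, se)])
```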
Abstract:
Adverse selection may thwart trade between an informed seller, who knows the probability p that an item of antiquity is genuine, and an uninformed buyer, who does not know p. The buyer might not be wholly uninformed, however. Suppose he can perform a simple inspection, a test of his own: the probability that an item passes the test is g if the item is genuine, but only f < g if it is fake. Given that the buyer is no expert, his test may have little power: f may be close to g. Unfortunately, without much power, the buyer's test will not resolve the difficulty of adverse selection; gains from trade may remain unexploited. But now consider a "store", where the seller groups a number of items, perhaps all with the same quality, the same probability p of being genuine. (We show that in equilibrium the seller will choose to group items in this manner.) Now the buyer can conduct his test across a large sample, perhaps all, of a group of items in the seller's store. He can thereby assess the overall quality of these items; he can invert the aggregate of his test results to uncover the underlying p; he can form a "prior". There is thus no longer asymmetric information between seller and buyer: gains from trade can be exploited. This is our theory of retailing: by grouping items together - setting up a store - a seller is able to supply buyers with priors, as well as the items themselves. We show that the weaker the power of the buyer's test (the closer f is to g), the greater the seller's profit. So the seller has no incentive to assist the buyer - e.g., by performing her own tests on the items, or by cleaning them to reveal more about their true age. The paper ends with an analysis of which sellers should specialise in which qualities. We show that quality will be low in busy locations and high in expensive locations.
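The inversion step the abstract describes is simple arithmetic: an item drawn from a group with genuine share p passes the test with probability p*g + (1-p)*f, so the observed aggregate pass rate identifies p. A sketch with illustrative numbers:

```python
import numpy as np

def estimate_p(pass_rate, f, g):
    """Invert the aggregate pass rate to recover the share p of genuine items.

    A single item passes with probability p*g + (1-p)*f, so across a large
    group with common p the observed pass rate identifies
        p = (pass_rate - f) / (g - f).
    Note the denominator: as f approaches g (a low-power test) the estimate
    becomes very noisy, which is why testing a large group of items matters.
    """
    return np.clip((pass_rate - f) / (g - f), 0.0, 1.0)

rng = np.random.default_rng(3)
p_true, f, g, n_items = 0.7, 0.55, 0.65, 5000   # illustrative values; f < g
genuine = rng.random(n_items) < p_true
passes = rng.random(n_items) < np.where(genuine, g, f)
print("estimated p:", round(estimate_p(passes.mean(), f, g), 3))
```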
Abstract:
This paper compares how increases in experience versus increases in knowledge about a public good affect willingness to pay (WTP) for its provision. This is challenging because while consumers are often certain about their previous experiences with a good, they may be uncertain about the accuracy of their knowledge. We therefore design and conduct a field experiment in which treated subjects receive a precise and objective signal regarding their knowledge about a public good before estimating their WTP for it. Using data for two different public goods, we show qualitative equivalence of the effect of knowledge and experience on valuation for a public good. Surprisingly, though, we find that objective signals about the accuracy of a subject’s knowledge about a public good can dramatically affect their valuation for it: treatment causes an increase of $150-$200 in WTP for well-informed individuals. We find no such effect for less informed subjects. Our results imply that WTP estimates for public goods are a function not only of the true information states of the respondents but also of their beliefs about those information states.
Abstract:
Using quarterly data for the U.K. from 1993 through 2012, we document that the extent of worker reallocation across occupations or industries (a career change, in the parlance of this paper) is high and procyclical. This holds true after controlling for workers' previous labour market status and for changes in the composition of who gets hired over the business cycle. Our evidence suggests that a large part of this reallocation reflects excess churning in the labour market. We also find that the majority of career changes come with wage increases. During the economic expansion, wage increases were typically larger for those who changed careers than for those who did not. During the recession, this was not true for career changers who were hired from unemployment. Our evidence suggests that understanding career changes over the business cycle is important for explaining labour market flows and the cyclicality of wage growth.
Abstract:
This paper develops a new test of true versus spurious long memory, based on log-periodogram estimation of the long memory parameter using skip-sampled data. A correction factor is derived to overcome the bias in this estimator due to aliasing. The procedure is designed to be used in the context of a conventional test of significance of the long memory parameter, and a composite test procedure is described that has the properties of known asymptotic size and consistency. The test is implemented using the bootstrap, with the distribution under the null hypothesis approximated by a dependent-sample bootstrap technique that captures short-run dependence following fractional differencing. The properties of the test are investigated in a set of Monte Carlo experiments. The procedure is illustrated by applications to exchange rate volatility and dividend growth series.
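A minimal sketch of the two ingredients the abstract combines: a GPH-style log-periodogram estimate of the long memory parameter d, applied to full and skip-sampled fractionally integrated noise. The paper's aliasing correction factor and bootstrap procedure are not reproduced; all parameters are illustrative.

```python
import numpy as np

def gph_estimate(x, bandwidth=None):
    """Log-periodogram (GPH) estimate of the long memory parameter d:
    regress log I(lambda_j) on -2*log(lambda_j) over the first m Fourier
    frequencies."""
    x = np.asarray(x, float)
    n = len(x)
    m = bandwidth or int(n**0.5)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.rfft(x - x.mean())[1:m + 1]
    periodogram = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
    A = np.column_stack([np.ones(m), -2.0 * np.log(lam)])
    return np.linalg.lstsq(A, np.log(periodogram), rcond=None)[0][1]

def fi_noise(n, d, seed=0):
    """Fractionally integrated noise (1-L)^(-d) eps_t via its MA(inf)
    representation: psi_0 = 1, psi_k = psi_{k-1} * (k-1+d) / k."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return np.convolve(eps, psi)[:n]

x = fi_noise(4096, d=0.3)
print("d, full sample:    ", round(gph_estimate(x), 3))
print("d, skip-sampled x4:", round(gph_estimate(x[::4]), 3))
# Skip-sampling preserves true long memory, but aliasing contaminates the
# spectrum and can bias the raw estimator -- the bias the paper's correction
# factor addresses.
```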