7 results for implied volatility function models
at University of Connecticut - USA
Abstract:
This paper empirically assesses whether monetary policy affects real economic activity through its effect on the aggregate supply side of the macroeconomy. Analysts typically argue either that monetary policy does not affect the real economy (the classical dichotomy) or that it affects the real economy only in the short run through aggregate demand (new Keynesian or new classical theories). Real business cycle theorists try to explain the business cycle with supply-side productivity shocks. We provide some preliminary evidence on how monetary policy affects the aggregate supply side of the macroeconomy through its effect on total factor productivity, an important measure of supply-side performance. The results show that monetary policy exerts a positive and statistically significant effect on the supply side of the macroeconomy. Moreover, the findings buttress the importance of countercyclical monetary policy and support the adoption of an optimal money supply rule. Our results also prove consistent with an effective role for monetary policy in the Great Moderation as well as in the more recent rise in productivity growth.
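Purely for illustration, a minimal sketch of the kind of supply-side regression such an assessment involves; the data below are simulated and the single-regressor specification is a hypothetical stand-in, not the paper's model or data:

```python
import numpy as np

# Hypothetical illustration: regress TFP growth on a monetary policy indicator.
rng = np.random.default_rng(3)
T = 200
policy = rng.normal(size=T)                       # stand-in for a policy measure
tfp_growth = 0.3 * policy + rng.normal(scale=0.5, size=T)

X = np.column_stack([np.ones(T), policy])         # constant + policy measure
beta, *_ = np.linalg.lstsq(X, tfp_growth, rcond=None)
resid = tfp_growth - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=X.shape[1]))
print(f"policy effect = {beta[1]:.3f} (t = {beta[1] / se[1]:.2f})")
```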
Abstract:
This paper revisits the issue of conditional volatility in real GDP growth rates for Canada, Japan, the United Kingdom, and the United States. Previous studies find high persistence in the volatility. This paper shows that this finding largely reflects a nonstationary variance. Output growth in the four countries became noticeably less volatile over the past few decades. In this paper, we employ the modified ICSS algorithm to detect structural change in the unconditional variance of output growth. One structural break exists in each of the four countries. We then estimate generalized autoregressive conditional heteroskedasticity (GARCH) specifications for output growth and its volatility, with and without the break in volatility. Once we incorporate the break into the variance equation of output for the four countries, the time-varying variance falls sharply in Canada, Japan, and the U.K. and disappears in the U.S., while excess kurtosis vanishes in Canada, Japan, and the U.S. and drops substantially in the U.K. That is, the integrated GARCH (IGARCH) effect proves spurious and the GARCH model is misspecified if researchers neglect a nonstationary unconditional variance.
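The modified ICSS step is not reproduced here; the sketch below, with simulated data and hypothetical names, only illustrates the second stage: a Gaussian GARCH(1,1) whose variance intercept shifts at a known break date, so that persistence (alpha + beta) can be compared with and without the break.

```python
import numpy as np
from scipy.optimize import minimize

def garch_break_negloglik(params, y, D):
    """Negative Gaussian log-likelihood of a GARCH(1,1) whose variance
    intercept shifts at a known break date:
        sigma2_t = omega0 + omega1*D_t + alpha*e_{t-1}^2 + beta*sigma2_{t-1}
    """
    mu, omega0, omega1, alpha, beta = params
    e = y - mu
    T = len(y)
    sigma2 = np.empty(T)
    sigma2[0] = np.var(e)                                # initialize at the sample variance
    for t in range(1, T):
        sigma2[t] = (omega0 + omega1 * D[t]
                     + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1])
    sigma2 = np.maximum(sigma2, 1e-12)                   # guard against non-positive variances
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + e ** 2 / sigma2)

def fit_garch_with_break(y, break_date):
    D = (np.arange(len(y)) >= break_date).astype(float)  # post-break dummy
    x0 = np.array([y.mean(), 0.1 * y.var(), 0.0, 0.05, 0.90])
    bounds = [(None, None), (1e-8, None), (None, None), (0.0, 1.0), (0.0, 1.0)]
    res = minimize(garch_break_negloglik, x0, args=(y, D),
                   bounds=bounds, method="L-BFGS-B")
    return res.x                                         # (mu, omega0, omega1, alpha, beta)

# Simulated growth rates with a volatility decline at t = 120 (hypothetical data).
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.8, 1.0, 120), rng.normal(0.7, 0.4, 80)])
mu, w0, w1, a, b = fit_garch_with_break(y, break_date=120)
print(f"alpha + beta with the break modeled: {a + b:.3f}")
```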
Abstract:
In applied work economists often seek to relate a given response variable y to some causal parameter mu* associated with it. This parameter usually represents a summarization, based on some explanatory variables, of the distribution of y, such as a regression function, and treating it as a conditional expectation is central to its identification and estimation. However, the interpretation of mu* as a conditional expectation breaks down if some or all of the explanatory variables are endogenous. This is not a problem when mu* is modelled as a parametric function of explanatory variables, because it is well known how instrumental variables techniques can be used to identify and estimate mu*. In contrast, handling endogenous regressors in nonparametric models, where mu* is regarded as fully unknown, presents difficult theoretical and practical challenges. In this paper we consider an endogenous nonparametric model based on a conditional moment restriction. We investigate identification-related properties of this model when the unknown function mu* belongs to a linear space. We also investigate underidentification of mu* along with the identification of its linear functionals. Several examples are provided in order to develop intuition about identification and estimation for endogenous nonparametric regression and related models.
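For intuition only, a minimal series (sieve) sketch of estimating a function mu* under the conditional moment restriction E[y - mu*(x) | w] = 0; the polynomial bases, tuning choices, and simulated data are assumptions, and the paper's focus is identification rather than this particular estimator.

```python
import numpy as np

def poly_basis(v, degree):
    """Simple polynomial sieve basis [1, v, v^2, ..., v^degree]."""
    return np.column_stack([v ** j for j in range(degree + 1)])

def sieve_npiv(y, x, w, deg_x=3, deg_w=5):
    """Series estimator of mu* under E[y - mu*(x) | w] = 0: approximate mu*
    by a polynomial in x and apply the 2SLS formula with a richer
    polynomial basis in the instrument w."""
    Phi = poly_basis(x, deg_x)                         # basis for the unknown function
    Psi = poly_basis(w, deg_w)                         # basis spanning the instrument space
    P = Psi @ np.linalg.pinv(Psi.T @ Psi) @ Psi.T      # projection onto the instrument basis
    coef = np.linalg.solve(Phi.T @ P @ Phi, Phi.T @ P @ y)
    return lambda x_new: poly_basis(x_new, deg_x) @ coef

# Simulated data with an endogenous regressor: x is correlated with the error u,
# w is a valid instrument, and the true function is mu*(x) = sin(x).
rng = np.random.default_rng(1)
n = 800
w = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * w + 0.5 * u + 0.3 * rng.normal(size=n)
y = np.sin(x) + u
mu_hat = sieve_npiv(y, x, w)
print(mu_hat(np.array([-1.0, 0.0, 1.0])))
```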
Abstract:
Lovell and Rouse (LR) have recently proposed a modification of the standard DEA model that overcomes the infeasibility problem often encountered in computing super-efficiency. In the LR procedure one appropriately scales up the observed input vector (or scales down the output vector) of the relevant super-efficient firm, thereby usually creating its inefficient surrogate. An alternative procedure proposed in this paper uses the directional distance function introduced by Chambers, Chung, and Färe and the resulting Nerlove-Luenberger (NL) measure of super-efficiency. Because the directional distance function combines features of both an input-oriented and an output-oriented model, it generally leads to a more complete ranking of the observations than either of the oriented models. An added advantage of this approach is that the NL super-efficiency measure is unique and does not depend on any arbitrary choice of a scaling parameter. A data set on international airlines from Coelli, Perelman, and Griffel-Tatje (2002) is utilized in an illustrative empirical application.
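A minimal sketch of a Nerlove-Luenberger super-efficiency computation under constant returns to scale, using the direction g = (x_o, y_o) and excluding the evaluated unit from the reference set; the tiny data set here is made up, not the airline data from Coelli, Perelman, and Griffel-Tatje (2002), and the exact LP in the paper may differ.

```python
import numpy as np
from scipy.optimize import linprog

def nl_super_efficiency(X, Y, o):
    """Nerlove-Luenberger super-efficiency of unit o under constant returns:
    max beta s.t. sum_{j != o} l_j x_j <= (1 - beta) x_o,
                  sum_{j != o} l_j y_j >= (1 + beta) y_o,  l >= 0,
    with direction g = (x_o, y_o); beta < 0 signals a super-efficient unit."""
    n, m = X.shape                                   # units, inputs
    s = Y.shape[1]                                   # outputs
    idx = [j for j in range(n) if j != o]            # reference set excludes unit o
    Xr, Yr = X[idx].T, Y[idx].T
    xo, yo = X[o], Y[o]
    c = np.r_[-1.0, np.zeros(n - 1)]                 # variables [beta, lambdas]; maximize beta
    A_in = np.c_[xo.reshape(-1, 1), Xr]              # beta*xo + X'l <= xo
    A_out = np.c_[yo.reshape(-1, 1), -Yr]            # beta*yo - Y'l <= -yo
    bounds = [(None, None)] + [(0, None)] * (n - 1)  # beta is free, lambdas nonnegative
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[xo, -yo],
                  bounds=bounds, method="highs")
    return -res.fun

# Tiny made-up example: 4 units, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [3.0, 2.0], [4.0, 5.0], [5.0, 4.0]])
Y = np.ones((4, 1))
print([round(nl_super_efficiency(X, Y, o), 3) for o in range(4)])
```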
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that does not necessarily prove consistent with the structure of the microeconomy. Thus, the social loss function cannot serve as a direct loss function for the central bank. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretic framework.
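A standard one-period Barro-Gordon-style illustration, not the paper's model and with arbitrary parameter values, of how an inconsistent output target in the social loss function creates the gap between optimal and consistent policy, and how assigning the central bank a consistent target removes it:

```python
# One-period Barro-Gordon-style example (hypothetical parameter values):
# social loss   L = (pi - pi_star)**2 + lam * (y - y_target)**2
# output        y = y_n + (pi - pi_e), with an overambitious target y_target > y_n
lam, pi_star, y_n, y_target = 0.5, 2.0, 0.0, 2.0

# Optimal (commitment) policy: announce and deliver pi = pi_star, so pi_e = pi_star.
pi_commit = pi_star

# Consistent (discretionary) policy: pi is chosen after pi_e is set; imposing
# pi_e = pi in the first-order condition yields the familiar inflation bias.
pi_discretion = pi_star + lam * (y_target - y_n)

# Assigning the central bank a loss function with the consistent target
# y_target = y_n makes discretion reproduce the commitment outcome.
pi_discretion_consistent_target = pi_star + lam * (y_n - y_n)

print(pi_commit, pi_discretion, pi_discretion_consistent_target)   # 2.0  3.0  2.0
```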
Abstract:
Consider a nonparametric regression model Y=mu*(X) + e, where the explanatory variables X are endogenous and e satisfies the conditional moment restriction E[e|W]=0 w.p.1 for instrumental variables W. It is well known that in these models the structural parameter mu* is 'ill-posed' in the sense that the function mapping the data to mu* is not continuous. In this paper, we derive the efficiency bounds for estimating linear functionals E[p(X)mu*(X)] and int_{supp(X)}p(x)mu*(x)dx, where p is a known weight function and supp(X) the support of X, without assuming mu* to be well-posed or even identified.
Abstract:
A problem frequently encountered in Data Envelopment Analysis (DEA) is that the total number of inputs and outputs included tends to be too large relative to the sample size. One way to counter this problem is to combine several inputs (or outputs) into (meaningful) aggregate variables, thereby reducing the dimension of the input (or output) vector. A direct effect of input aggregation is to reduce the number of constraints. This, in turn, alters the optimal value of the objective function. In this paper, we show how a statistical test proposed by Banker (1993) may be applied to test the validity of a specific way of aggregating several inputs. An empirical application using data from Indian manufacturing for the year 2002-03 is included as an example of the proposed test.
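A hedged sketch of how such a test could be implemented: input-oriented CRS DEA efficiencies are computed with and without aggregation and the resulting inefficiencies compared with an F-type statistic. The particular inefficiency measure, distributional assumption, and degrees of freedom below are illustrative choices; Banker (1993) offers several variants, and the paper's exact statistic may differ. The data are simulated.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import f

def dea_input_efficiency(X, Y, o):
    """Input-oriented CRS DEA efficiency (theta) of unit o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                        # variables [theta, lambdas]; minimize theta
    A_in = np.c_[-X[o].reshape(-1, 1), X.T]            # X'l <= theta * x_o
    A_out = np.c_[np.zeros((s, 1)), -Y.T]              # Y'l >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

def aggregation_test(X_disagg, X_agg, Y):
    """F-type comparison of inefficiency under the two input specifications
    (an exponential-inefficiency variant of a Banker-style test)."""
    n = X_disagg.shape[0]
    ineff_d = np.array([1 - dea_input_efficiency(X_disagg, Y, o) for o in range(n)])
    ineff_a = np.array([1 - dea_input_efficiency(X_agg, Y, o) for o in range(n)])
    stat = ineff_a.sum() / max(ineff_d.sum(), 1e-12)   # aggregation cannot lower inefficiency
    return stat, 1 - f.cdf(stat, 2 * n, 2 * n)

# Hypothetical illustration: 20 firms, 3 raw inputs summed into one aggregate input.
rng = np.random.default_rng(4)
Xd = rng.uniform(1, 10, size=(20, 3))
Y = rng.uniform(1, 5, size=(20, 1))
print(aggregation_test(Xd, Xd.sum(axis=1, keepdims=True), Y))
```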