Abstract:
A full understanding of public affairs requires the ability to distinguish between the policies that voters would like the government to adopt, and the influence that different voters or groups of voters actually exert in the democratic process. We consider the properties of a computable equilibrium model of a competitive political economy in which the economic interests of groups of voters and their effective influence on equilibrium policy outcomes can be explicitly distinguished and computed. The model incorporates an amended version of the GEMTAP tax model, and is calibrated to data for the United States for 1973 and 1983. Emphasis is placed on how the aggregation of GEMTAP households into groups, within which economic and political behaviour is assumed homogeneous, affects the numerical representation of interests and influence for representative members of each group. Experiments with the model suggest that changes in both interests and influence are important parts of the story behind the evolution of U.S. tax policy in the decade after 1973.
Abstract:
This paper exploits the term structure of interest rates to develop testable economic restrictions on the joint process of long-term interest rates and inflation when the latter is subject to a targeting policy by the Central Bank. Two competing models that econometrically describe agents’ inferences about inflation targets are developed and shown to generate distinct predictions on the behavior of interest rates. In an empirical application to the Canadian inflation target zone, results indicate that agents perceive the band to be substantially narrower than officially announced and asymmetric around the stated mid-point. The latter result (i) suggests that the monetary authority attaches different weights to positive and negative deviations from the central target, and (ii) challenges on empirical grounds the assumption, frequently made in the literature, that the policy maker’s loss function is symmetric (usually a quadratic function) around a desired inflation value.
Abstract:
We characterize the solution to a model of consumption smoothing that combines financial contracts under non-commitment with savings. We show that, under certain conditions, these two instruments complement each other perfectly. If the rate of time preference is equal to the interest rate on savings, perfect smoothing can be achieved in finite time. We also show that, when random revenues are generated by periodic investments in capital through a concave production function, the level of smoothing achieved through financial contracts can influence the efficiency of productive investment. As long as financial contracts cannot achieve perfect smoothing, productive investment will be used as a complementary smoothing device.
Abstract:
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical applications of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires least-squares operations of order at most O(T²) for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
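The dynamic-programming step lends itself to a compact illustration. The sketch below is a minimal Python rendition of the idea (mean-shift segments only, and not the authors' GAUSS implementation): precompute the sum of squared residuals for every admissible segment, which is where the O(T²) least-squares operations arise, then combine segments recursively. The function names and the minimal segment length h are illustrative.

```python
import numpy as np

def segment_ssr(y, i, j):
    """Sum of squared residuals of a mean-only fit on the segment y[i:j]."""
    seg = y[i:j]
    return float(np.sum((seg - seg.mean()) ** 2))

def break_dates(y, m, h):
    """Global minimizers of the total SSR with m breaks, via dynamic
    programming; h is the minimal admissible segment length."""
    T = len(y)
    # Precompute the SSR of every admissible segment: this is where the
    # O(T^2) least-squares operations arise.
    ssr = {(i, j): segment_ssr(y, i, j)
           for i in range(T) for j in range(i + h, T + 1)}
    # cost[k][t]: minimal SSR of splitting y[0:t] into k + 1 segments.
    cost = [{t: ssr[(0, t)] for t in range(h, T + 1)}]
    argmin = []
    for k in range(1, m + 1):
        cost.append({})
        argmin.append({})
        for t in range((k + 1) * h, T + 1):
            cost[k][t], argmin[k - 1][t] = min(
                (cost[k - 1][s] + ssr[(s, t)], s)
                for s in range(k * h, t - h + 1))
    # Backtrack the optimal break dates (start index of each new segment).
    breaks, t = [], T
    for k in range(m - 1, -1, -1):
        t = argmin[k][t]
        breaks.append(t)
    return sorted(breaks), cost[m][T]

# Example: one mean shift at observation 100 in a series of length 200.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(break_dates(y, m=1, h=10))
```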
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allow a gain in degrees of freedom, and on a sample-split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
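For the single-regressor case, the instrument substitution idea reduces to a familiar regression F-test. The sketch below is a hedged Python illustration, not the paper's full procedure: to test beta = beta0, regress y - x*beta0 on the instruments and test that their coefficients are jointly zero; the null distribution of the statistic does not depend on instrument strength. All names and the simulated design are illustrative.

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, x, Z, beta0):
    """Test H0: beta = beta0 in y = x * beta + u with instruments Z: regress
    y - x * beta0 on a constant and Z, and F-test that the coefficients on
    Z are jointly zero. Exact under Gaussian errors, whatever the strength
    of the instruments."""
    T, k = Z.shape
    v = y - x * beta0
    Zc = np.column_stack([np.ones(T), Z])
    resid_u = v - Zc @ np.linalg.lstsq(Zc, v, rcond=None)[0]
    resid_r = v - v.mean()                       # restricted: constant only
    ssr_u, ssr_r = resid_u @ resid_u, resid_r @ resid_r
    F = ((ssr_r - ssr_u) / k) / (ssr_u / (T - k - 1))
    return F, stats.f.sf(F, k, T - k - 1)

# Weak-instrument design: H0 is true, so rejections occur only at the
# nominal rate regardless of how weakly x is related to Z.
rng = np.random.default_rng(1)
T = 200
Z = rng.normal(size=(T, 2))
x = 0.05 * Z[:, 0] + rng.normal(size=T)
y = 0.5 * x + rng.normal(size=T)
print(anderson_rubin_test(y, x, Z, beta0=0.5))
```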
Abstract:
This paper considers various asymptotic approximations in the near-integrated first-order autoregressive model with a non-zero initial condition. We first extend the work of Knight and Satchell (1993), who considered the random walk case with a zero initial condition, to derive the expansion of the relevant joint moment generating function in this more general framework. We also consider, as alternative approximations, the stochastic expansion of Phillips (1987c) and the continuous time approximation of Perron (1991). We assess whether these alternative methods provide an adequate approximation to the finite-sample distribution of the least-squares estimator in a first-order autoregressive model. The results show that, when the initial condition is non-zero, Perron's (1991) continuous time approximation performs very well, while the others only offer improvements when the initial condition is zero.
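Since the object of interest is the finite-sample distribution of the least-squares estimator, a small Monte Carlo makes the role of the initial condition concrete. The Python sketch below (illustrative parameter values, not the paper's analytic expansions) simulates a near-integrated AR(1) with rho = 1 + c/T and tabulates the centred and scaled estimator for different initial conditions.

```python
import numpy as np

def ls_rho_distribution(T, c, y0, n_rep=5000, seed=0):
    """Monte Carlo distribution of T * (rho_hat - rho) for the least-squares
    estimator in a near-integrated AR(1) with rho = 1 + c/T and initial
    condition y0."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T
    stat = np.empty(n_rep)
    for r in range(n_rep):
        e = rng.normal(size=T)
        y = np.empty(T + 1)
        y[0] = y0
        for t in range(T):
            y[t + 1] = rho * y[t] + e[t]
        ylag, ycur = y[:-1], y[1:]
        stat[r] = T * ((ylag @ ycur) / (ylag @ ylag) - rho)
    return stat

# A non-zero initial condition visibly shifts the distribution at T = 100.
for y0 in (0.0, 5.0):
    d = ls_rho_distribution(T=100, c=-5.0, y0=y0)
    print(y0, round(float(np.mean(d)), 2), round(float(np.percentile(d, 5)), 2))
```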
Abstract:
This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which they converge. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet, this is sufficient to bring out interesting conclusions about the particular elements which cause the approximations to be inadequate even in quite large samples.
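A toy version of the exercise is easy to reproduce. The Python sketch below (parameter values are illustrative) simulates ARMA(1,1) errors, scales the partial sum by the long-run standard deviation, and compares its dispersion with the N(0,1) limit implied by the FCLT; a strong negative MA root keeps the approximation poor unless the sample is large.

```python
import numpy as np

def scaled_partial_sums(T, ar, ma, n_rep=2000, seed=0):
    """Scaled partial sums S_T / (sigma_lr * sqrt(T)) for ARMA(1,1) errors
    u_t = ar * u_{t-1} + e_t + ma * e_{t-1}; by the FCLT these should be
    approximately N(0, 1), the value of the limiting Wiener process at 1."""
    rng = np.random.default_rng(seed)
    sigma_lr = (1 + ma) / (1 - ar)      # long-run std. dev., unit innovations
    out = np.empty(n_rep)
    for r in range(n_rep):
        e = rng.normal(size=T + 1)
        u = np.empty(T)
        u[0] = e[1] + ma * e[0]         # crude initialization
        for t in range(1, T):
            u[t] = ar * u[t - 1] + e[t + 1] + ma * e[t]
        out[r] = u.sum() / (sigma_lr * np.sqrt(T))
    return out

# With ma = -0.8 the standard deviation converges to 1 only slowly in T.
for T in (50, 200, 1000):
    s = scaled_partial_sums(T=T, ar=0.0, ma=-0.8)
    print(T, round(float(np.std(s)), 3))
```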
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) versus unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models and continuous-time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous-time GARCH modeling can be recovered in our framework.
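The key structural ingredient, an AR(1) state variable playing the role of the conditional variance, can be illustrated with a schematic simulation. The Python sketch below is not the authors' parametrization: it uses an arbitrary positive multiplicative shock to keep the latent factor positive while preserving the linear conditional expectation, and then checks that the autocorrelations of squared innovations decay geometrically at the persistence rate, the ARMA-type dynamics the model retains from GARCH.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
omega, gamma = 0.1, 0.9                 # illustrative persistence values

# Latent AR(1) factor f_t acting as the conditional variance of eps_t.
# The multiplicative uniform shock has mean one, so E[f_{t+1} | f_t] stays
# linear (as SR-SARV requires) while f_t remains strictly positive.
f = np.empty(T + 1)
f[0] = omega / (1 - gamma)              # start at the unconditional mean
eps = np.empty(T)
for t in range(T):
    f[t + 1] = omega + gamma * f[t] * rng.uniform(0.5, 1.5)
    eps[t] = np.sqrt(f[t]) * rng.normal()

# ARMA-type dynamics: the autocorrelations of eps_t^2 decay geometrically
# at rate gamma beyond the first lag.
x = eps**2 - (eps**2).mean()
acf = [float(x[k:] @ x[:-k] / (x @ x)) for k in (1, 2, 3, 4)]
print([round(a, 3) for a in acf])
print([round(acf[0] * gamma ** (k - 1), 3) for k in (1, 2, 3, 4)])
```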
Abstract:
We reconsider the discrete version of the axiomatic cost-sharing model. We propose a condition of (informational) coherence requiring that not all informational refinements of a given problem be solved differently from the original problem. We prove that strictly coherent linear cost-sharing rules must be simple random-order rules.
Abstract:
This article studies mobility patterns of German workers in light of a model of sector-specific human capital. Furthermore, I employ and describe little-used data on continuous on-the-job training occurring after apprenticeships. Results are presented describing the incidence and duration of continuous training. Continuous training is quite common, despite the high incidence of apprenticeships which precedes this part of a worker's career. Most previous studies have only distinguished between firm-specific and general human capital, usually concluding that training was general. Inconsistent with those conclusions, I show that German men are more likely to find a job within the same sector if they have received continuous training in that sector. These results are similar to those obtained for young U.S. workers, and suggest that sector-specific capital is an important feature of very different labor markets. In addition, they suggest that the observed effect of training on mobility is sensitive to the state of the business cycle, indicating a more complex interaction between supply and demand than most theoretical models allow for.
Abstract:
Using data from the National Longitudinal Survey of Youth (NLSY), we re-examine the effect of formal on-the-job training on the mobility patterns of young American workers. By employing parametric duration models, we evaluate the economic impact of training on productive time with an employer. Confirming previous studies, we find a positive and statistically significant impact of formal on-the-job training on tenure with the employer providing the training. However, the expected duration of employment net of the time spent in the training program is generally not significantly increased. We proceed to document and analyze intra-sectoral and cross-sectoral mobility patterns in order to infer whether training provides firm-specific, industry-specific, or general human capital. The econometric analysis rejects a sequential model of job separation in favor of a competing-risks specification. We find significant evidence for the industry-specificity of training. The probability of sectoral mobility upon job separation decreases with training received in the current industry, whether with the last employer or previous employers, and employment attachment increases with on-the-job training. These results are robust to a number of variations on the base model.
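The competing-risks logic can be sketched compactly. The Python example below (simulated data, exponential cause-specific hazards, censoring omitted for brevity; all parameter values are illustrative and not estimates from the NLSY) shows a maximum-likelihood fit in which training shifts the within-sector and cross-sector exit hazards in opposite directions, the pattern the paper associates with industry-specific capital.

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(theta, dur, cause, train):
    """Exponential competing risks: cause-specific hazards h_j depend on a
    training indicator; the density of exiting at dur via cause j is
    h_j * exp(-(h1 + h2) * dur)."""
    a1, b1, a2, b2 = theta
    h1 = np.exp(a1 + b1 * train)        # hazard of a within-sector move
    h2 = np.exp(a2 + b2 * train)        # hazard of a cross-sector move
    logh = np.where(cause == 1, np.log(h1), np.log(h2))
    return -np.sum(logh - (h1 + h2) * dur)

rng = np.random.default_rng(2)
n = 5000
train = rng.integers(0, 2, n)
h1 = np.exp(-1.0 + 0.2 * train)         # training tilts exits toward ...
h2 = np.exp(-1.5 - 0.5 * train)         # ... the same sector
t1, t2 = rng.exponential(1 / h1), rng.exponential(1 / h2)
dur, cause = np.minimum(t1, t2), np.where(t1 < t2, 1, 2)

fit = minimize(negloglik, x0=np.zeros(4), args=(dur, cause, train))
print(np.round(fit.x, 2))               # close to (-1.0, 0.2, -1.5, -0.5)
```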
Abstract:
We examine the relationship between the risk premium on the S&P 500 index return and its conditional variance. We use the SMEGARCH (Semiparametric-Mean EGARCH) model, in which the conditional variance process is EGARCH while the conditional mean is an arbitrary function of the conditional variance. For monthly S&P 500 excess returns, the relationship between the two moments that we uncover is nonlinear and nonmonotonic. We also find considerable persistence in the conditional variance as well as a leverage effect, as documented by others. Moreover, the shape of these relationships seems to be relatively stable over time.
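A two-step approximation to this semiparametric design is straightforward to sketch, although the paper's estimator is more involved. The Python fragment below assumes the arch package for the EGARCH step and uses a Nadaraya-Watson kernel regression for the mean; the function name, bandwidth rule, and evaluation grid are all illustrative.

```python
import numpy as np
from arch import arch_model

def smegarch_mean(returns, bandwidth=None):
    """Two-step sketch: (1) fit an EGARCH(1,1) to get the conditional
    variance path; (2) estimate the conditional mean as an arbitrary
    function of that variance by Nadaraya-Watson kernel regression."""
    res = arch_model(returns, mean='Constant', vol='EGARCH',
                     p=1, o=1, q=1).fit(disp='off')
    r = np.asarray(returns, dtype=float)
    v = np.asarray(res.conditional_volatility) ** 2
    h = bandwidth or 1.06 * np.std(v) * len(v) ** (-0.2)   # rule of thumb
    grid = np.linspace(np.quantile(v, 0.05), np.quantile(v, 0.95), 50)
    w = np.exp(-0.5 * ((grid[:, None] - v[None, :]) / h) ** 2)
    return grid, (w @ r) / w.sum(axis=1)    # plot to inspect the shape
```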
Abstract:
This paper examines empirically the effects of distortionary taxation on labor supply using a general equilibrium framework. The long-term relations predicted by the model are derived and tested using Canadian data between 1966 and 1993. While the cointegrating predictions of the model without taxation are rejected, those of the model with labor taxation are not. Persistent increases in the labor tax rate appear to play an important role in the observed downward trend in hours worked.
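The cointegration logic of such a test can be illustrated on simulated series. The Python sketch below (not the paper's Canadian data; the data-generating process and sample length are illustrative) applies the Engle-Granger test from statsmodels to an hours series tied to a trending tax rate by a long-run relation.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Simulated quarterly series, 1966-1993: hours tied to a trending labor tax
# rate by a long-run relation, so the pair should cointegrate.
rng = np.random.default_rng(3)
T = 112
tax = np.cumsum(rng.normal(0.05, 0.3, T))           # trending tax rate
hours = 100 - 0.8 * tax + rng.normal(0, 0.5, T)     # long-run relation + noise

t_stat, p_value, _ = coint(hours, tax)              # Engle-Granger test
print(round(float(t_stat), 2), round(float(p_value), 3))   # small p-value
```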
Abstract:
This paper studies the proposition that an inflation bias can arise in a setup where a central banker with asymmetric preferences targets the natural unemployment rate. Preferences are asymmetric in the sense that positive unemployment deviations from the natural rate are weighted more (or less) severely than negative deviations in the central banker's loss function. The bias is proportional to the conditional variance of unemployment. The time-series predictions of the model are evaluated using data from G7 countries. Econometric estimates support the prediction that the conditional variance of unemployment and the rate of inflation are positively related.
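A reduced-form version of that test is easy to sketch. The Python fragment below (assuming the arch and statsmodels packages; the specification details, including differencing the unemployment series, are illustrative choices rather than the paper's exact setup) extracts a GARCH(1,1) conditional variance for unemployment innovations and checks whether it is positively related to inflation.

```python
import numpy as np
import statsmodels.api as sm
from arch import arch_model

def inflation_bias_test(inflation, unemployment):
    """Fit a GARCH(1,1) to unemployment innovations, then regress inflation
    on the fitted conditional variance; a positive, significant slope is
    the signature of the asymmetric-preferences inflation bias."""
    du = np.diff(np.asarray(unemployment, dtype=float))
    garch = arch_model(du, mean='Constant', vol='GARCH',
                       p=1, q=1).fit(disp='off')
    cond_var = np.asarray(garch.conditional_volatility) ** 2
    X = sm.add_constant(cond_var)
    ols = sm.OLS(np.asarray(inflation, dtype=float)[1:], X).fit()
    return ols.params[1], ols.pvalues[1]
```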
Abstract:
This paper proposes a model of natural-resource exploitation when private ownership requires costly enforcement activities. For a given wage rate, it is shown how enforcement costs can increase with labor's average productivity on a resource site. As a result, it is never optimal for the site owner to produce at the point where marginal productivity equals the wage rate. It may even be optimal to exploit at a point exhibiting negative marginal returns. An important parameter in the analysis is the prevailing wage rate. When wages are low, further decreases in the wage rate can reduce the returns from resource exploitation. At sufficiently low wages, positive returns can become impossible to achieve, and the site is abandoned to free-access exploitation. The analysis provides some clues as to why property rights may be more difficult to delineate in less developed countries. It proposes a different framework from which to address normative issues such as the desirability of free trade with endogenous enforcement costs, the optimality of private decisions to enforce property rights, the effect of income distribution on the enforceability of property rights, etc.
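A numerical toy example makes the wedge between marginal productivity and the wage concrete. The Python sketch below uses an arbitrary parametrization, not the paper's model: output is sqrt(L) and the enforcement cost is proportional to average productivity, so the profit-maximizing employment level no longer satisfies F'(L) = w/p.

```python
import numpy as np

# Output F(L) = sqrt(L); enforcement cost proportional to average
# productivity F(L)/L, so a more productive site is costlier to defend.
p, w, k = 1.0, 0.05, 40.0
L = np.linspace(0.01, 400.0, 100_000)
F = np.sqrt(L)
profit = p * F - w * L - k * F / L      # revenue - wages - enforcement
L_star = L[np.argmax(profit)]
print(round(float(L_star), 1))                   # optimal employment
print(round(0.5 / np.sqrt(L_star), 4), w / p)    # F'(L*) differs from w/p
```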