Abstract:
In this paper, we provide both qualitative and quantitative measures of the cost of approximating integrated volatility by realized volatility when the frequency of observation is fixed. We start by characterizing, for a general diffusion, the difference between the realized and the integrated volatilities at a given observation frequency. Then we compute the mean and variance of this noise, and the correlation between the noise and the integrated volatility, in the Eigenfunction Stochastic Volatility model of Meddahi (2001a). This model includes, as special cases, the log-normal, affine, and GARCH diffusion models. Drawing on previous empirical work, we show that the standard deviation of the noise is not negligible relative to the mean and the standard deviation of the integrated volatility, even when returns are sampled every five minutes. We also propose a simple approach for capturing the information about the integrated volatility contained in the returns through the leverage effect.
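As a rough illustration of the comparison the paper formalizes, the following sketch simulates a GARCH diffusion (one of the special cases mentioned above) on a fine grid, then contrasts the realized variance built from 5-minute returns with the integrated variance. All parameter values, the Euler discretization, and the absence of drift and leverage are illustrative assumptions, not the paper's calculations.

import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma_v = 4.0, 0.04, 0.5   # mean reversion, long-run variance, vol-of-vol (illustrative)
T = 1.0 / 252            # one trading day, in years
n_obs = 78               # number of 5-minute returns in a 6.5-hour day
n_fine = n_obs * 100     # fine Euler grid (100 steps per 5-minute interval)
dt = T / n_fine
n_sims = 1000

noise = np.empty(n_sims)
iv_all = np.empty(n_sims)
for s in range(n_sims):
    v = np.empty(n_fine + 1)
    v[0] = theta
    dW = rng.standard_normal(n_fine) * np.sqrt(dt)
    for i in range(n_fine):
        # Euler scheme for the GARCH diffusion dv = kappa*(theta - v)*dt + sigma_v*v*dW
        v[i + 1] = max(v[i] + kappa * (theta - v[i]) * dt + sigma_v * v[i] * dW[i], 1e-12)
    dB = rng.standard_normal(n_fine) * np.sqrt(dt)
    r_fine = np.sqrt(v[:-1]) * dB                   # fine-grid returns (no drift, no leverage)
    iv = np.sum(v[:-1]) * dt                        # integrated variance over the day
    r_5min = r_fine.reshape(n_obs, -1).sum(axis=1)  # aggregate to 5-minute returns
    rv = np.sum(r_5min ** 2)                        # realized variance at the 5-minute frequency
    noise[s], iv_all[s] = rv - iv, iv

print(f"mean(IV) = {iv_all.mean():.3e}, std(IV) = {iv_all.std():.3e}")
print(f"mean(RV - IV) = {noise.mean():.3e}, std(RV - IV) = {noise.std():.3e}")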
Abstract:
We consider a probabilistic approach to the problem of assigning k indivisible identical objects to a set of agents with single-peaked preferences. Using the ordinal extension of preferences, we characterize the class of uniform probabilistic rules by Pareto efficiency, strategy-proofness, and no-envy. We also show that in this characterization no-envy cannot be replaced by anonymity. When agents are strictly risk-averse von Neumann-Morgenstern utility maximizers, we reduce the problem of assigning k identical objects to a problem of allocating the amount k of an infinitely divisible commodity.
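For the divisible-commodity problem that the assignment problem reduces to, the benchmark is the classical uniform rule. The sketch below computes it by bisection; the tolerance and the example peaks are illustrative assumptions.

def uniform_rule(peaks, k, tol=1e-12):
    """Classical uniform rule: allocate amount k given single-peaked agents' peaks."""
    total = sum(peaks)
    rationing = total >= k   # demand exceeds supply: cap at min(peak, lam); else pad to max(peak, lam)

    def allocated(lam):
        if rationing:
            return sum(min(p, lam) for p in peaks)
        return sum(max(p, lam) for p in peaks)

    lo, hi = 0.0, max(max(peaks), k)
    while hi - lo > tol:         # bisection on the common bound lam
        mid = (lo + hi) / 2
        if allocated(mid) < k:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [min(p, lam) if rationing else max(p, lam) for p in peaks]

# Three agents with peaks (3, 1, 2) share k = 4 units: the common cap is 1.5.
print(uniform_rule([3.0, 1.0, 2.0], k=4.0))   # -> [1.5, 1.0, 1.5]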
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming an ad hoc loading function (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials respectively. The eigenfunction approach has at least six advantages: i) it is general, since any square-integrable function may be written as a linear combination of the eigenfunctions; ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal component analysis; iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; iv) more importantly, this generates fat tails for the variance and return processes; v) in contrast to popular models, the variance of the variance is a flexible function of the variance; vi) these models are closed under temporal aggregation.
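A minimal numerical check of the eigenfunction property in the log-normal (Ornstein-Uhlenbeck state variable) case: probabilists' Hermite polynomials are eigenfunctions of the conditional expectation operator, with eigenvalue exp(-k h) at horizon h. The horizon, quadrature order, and evaluation points below are illustrative choices.

import numpy as np
from numpy.polynomial.hermite_e import HermiteE, hermegauss

h = 0.5                                  # horizon; OU: X_{t+h} | X_t = x ~ N(x e^{-h}, 1 - e^{-2h})
nodes, weights = hermegauss(60)          # Gauss-Hermite nodes for weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)   # normalize so the weights integrate N(0,1) expectations

for k in range(1, 5):
    He_k = HermiteE([0] * k + [1])       # probabilists' Hermite polynomial of degree k
    for x in (-1.0, 0.3, 2.0):
        m, s = x * np.exp(-h), np.sqrt(1 - np.exp(-2 * h))
        cond_exp = np.sum(weights * He_k(m + s * nodes))  # E[He_k(X_{t+h}) | X_t = x]
        print(k, x, cond_exp, np.exp(-k * h) * He_k(x))   # the two columns should match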
Abstract:
This text surveys the different approaches designed to measure technical progress. We use uniform notation throughout the mathematical derivations and highlight the assumptions that make the proposed methods applicable and that limit their scope. The various approaches are grouped according to a classification suggested by Diewert (1981), which distinguishes two groups. The first group contains all the methods that define technical progress as the growth rate of an output index divided by an input index (the Divisia approach). The other group includes all the methods that define technical progress as the shift of a function representing the technology (production, cost, distance). This second group is subdivided into the econometric approach, index number theory, and the nonparametric approach. A list of the principal economists to whom the various approaches are due is provided. The survey is nonetheless detailed enough to be read without referring to the original articles.
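In discrete time, the Divisia approach of the first group is commonly approximated with Törnqvist indices: technical progress is the growth of a Törnqvist output index minus that of a Törnqvist input index. The sketch below uses made-up quantities and shares purely for illustration.

import numpy as np

def tornqvist_growth(q0, q1, shares0, shares1):
    """Log growth of a Tornqvist quantity index between two periods."""
    avg_shares = 0.5 * (np.asarray(shares0) + np.asarray(shares1))
    return float(np.sum(avg_shares * np.log(np.asarray(q1) / np.asarray(q0))))

# Two outputs and two inputs, with revenue and cost shares in each period (made up).
dy = tornqvist_growth([100, 50], [106, 53], [0.6, 0.4], [0.6, 0.4])   # output index growth
dx = tornqvist_growth([80, 40], [82, 41], [0.7, 0.3], [0.7, 0.3])     # input index growth
print(f"technical progress (Divisia residual): {dy - dx:.4f}")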
Abstract:
The purpose of this report is to present the approach used by the authors to produce long-term forecasts of overseas container traffic for the Port of Montreal. This approach first requires estimating container traffic by commodity category and by origin and destination over recent years. We then obtained container traffic forecasts for 1995 based on expectations about general trends in Canadian foreign trade and the composition of these flows by commodity group. We also had to project the likely evolution of containerization rates, taking into account the various commodities and the trading partners involved. We further considered the possible evolution of the boundaries of the Port of Montreal's hinterland. The importance of traffic generated by the U.S. Midwest has grown considerably over the past decade, owing to a number of institutional factors. Our container traffic forecasts for the Port of Montreal therefore depend largely on whether the U.S. Midwest remains within the Port of Montreal's hinterland. Finally, we present two forecast scenarios. The first assumes that the current competitive position of the Port of Montreal remains virtually unchanged. The second assumes the disappearance of a major container shipping company based in Montreal.
Abstract:
This article is divided into three sections. The first contains a critique of certain Post-Keynesians, notably American ones, concerning their hypothesis of markup rigidity in the price-wage relation. Unlike Kalecki, who readily accepted markup flexibility in the face of price and wage rigidity, we show that the position of the American Post-Keynesians is merely a variant of the monetarist approach. The second section generalizes the hypothesis of markup variability caused by all factors of production, notably the variability stemming from movements in real and financial capital. Using annual data for all Canadian industries between 1973 and 1982, we estimate in the last section that wage costs in Canada are responsible for only 16% of inflation over the past decade, while the costs of real and financial capital account for 84% of the rise in prices.
Abstract:
This article presents a review of the stabilization attempts in Argentina, Brazil, and Israel during the 1980s. Earlier research is summarized and complemented with additional sources of contemporaneous information and a detailed analysis of institutional features. The examination of these episodes underscores the strong economic and empirical relationship between governments’ fiscal policy and the rate of inflation.
Abstract:
This paper develops a model of money demand where the opportunity cost of holding money is subject to regime changes. The regimes are fully characterized by the mean and variance of inflation and are assumed to be the result of alternative government policies. Agents are unable to directly observe whether government actions are indeed consistent with the inflation rate targeted as part of a stabilization program, but they can construct probability inferences on the basis of available observations of inflation and money growth. Government announcements are assumed to provide agents with additional, possibly truthful, information regarding the regime. This specification is estimated and tested using data from the Israeli and Argentine high-inflation periods. Results indicate that the successful stabilization program implemented in Israel in July 1985 was more credible than either the earlier Israeli attempt of November 1984 or the Argentine programs. Government signaling might substantially simplify the inference problem and increase the speed of learning on the part of the agents; however, under certain conditions, it might increase the volatility of inflation. After the introduction of an inflation stabilization plan, the welfare gains from a temporary increase in real balances might be high enough to induce agents to raise their real balances in the short term, even if they are uncertain about the nature of government policy and the eventual outcome of the stabilization attempt. Statistically, the model restrictions cannot be rejected at the 1% significance level.
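The learning mechanism can be illustrated with a minimal recursive Bayes update of regime probabilities from inflation observations; the two regimes' means and variances and the data below are illustrative assumptions, not the paper's estimates.

import numpy as np
from scipy.stats import norm

# Regime 1: stabilization (low mean, low variance); regime 2: high inflation.
mu = np.array([0.01, 0.10])
sd = np.array([0.01, 0.05])
prior = np.array([0.5, 0.5])             # initial beliefs about the regime

inflation = [0.02, 0.015, 0.01, 0.012]   # made-up monthly observations
for pi in inflation:
    like = norm.pdf(pi, loc=mu, scale=sd)   # likelihood of the observation under each regime
    post = prior * like
    prior = post / post.sum()               # Bayes update of regime probabilities
    print(f"obs {pi:.3f} -> P(stabilization regime) = {prior[0]:.3f}")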
Abstract:
We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first uses a standard Wald-type statistic. The second assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets for the endogenous variables are derived by a projection technique. The latter approach has two advantages: first, the validity of the confidence set is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show that they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases in transfers from Moroccan expatriates.
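A minimal sketch of the projection idea: given a confidence set for the free parameters, a conservative confidence interval for any model outcome is its range over that set, whatever the nonlinearity of the mapping. The toy outcome function, the ellipsoidal parameter set, and the boundary-sampling approximation below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
theta_hat = np.array([1.0, 0.5])              # point estimates of the free parameters
V = np.array([[0.04, 0.01], [0.01, 0.02]])    # estimated parameter covariance matrix
chi2_95 = 5.991                               # 95% critical value of chi-square(2)

def outcome(theta):
    """Toy endogenous variable, nonlinear in the parameters."""
    return np.exp(theta[..., 0]) * theta[..., 1] ** 2

# Points on the boundary of the 95% Wald confidence ellipsoid for theta.
L = np.linalg.cholesky(V)
z = rng.standard_normal((100000, 2))
z *= np.sqrt(chi2_95) / np.linalg.norm(z, axis=1, keepdims=True)
draws = theta_hat + z @ L.T

# Projection: the image of the parameter confidence set gives a conservative
# confidence interval for the outcome (here the extremes lie on the boundary).
y = outcome(draws)
print(f"projection-based 95% confidence interval: [{y.min():.3f}, {y.max():.3f}]")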
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and kurtosis. We establish direct links between the usual parametric estimation methods, namely the QMLE, the GMM, and M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM estimator. A Monte Carlo analysis confirms the relevance of our approach, in particular the importance of skewness.
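For concreteness, the sketch below implements the usual univariate Gaussian QMLE for a model specified by its conditional mean and variance, here an illustrative AR(1) mean with ARCH(1) variance; the efficiency comparisons in the abstract concern what information enters the quasi-likelihood, which this simple sketch does not reproduce.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate an AR(1) mean with ARCH(1) variance (Gaussian innovations for simplicity).
T = 2000
phi0, omega0, alpha0 = 0.3, 0.1, 0.4
y = np.zeros(T)
for t in range(1, T):
    eps_prev = y[t - 1] - phi0 * y[t - 2] if t >= 2 else 0.0
    sig2_t = omega0 + alpha0 * eps_prev ** 2
    y[t] = phi0 * y[t - 1] + np.sqrt(sig2_t) * rng.standard_normal()

def neg_quasi_loglik(params):
    phi, omega, alpha = params
    if omega <= 1e-8 or not 0.0 <= alpha < 1.0:
        return np.inf                      # keep the variance parameters admissible
    eps = y[1:] - phi * y[:-1]             # conditional-mean residuals
    # Conditional variances; the first one is initialized at the unconditional level.
    sig2 = omega + alpha * np.concatenate(([omega / (1 - alpha)], eps[:-1] ** 2))
    return 0.5 * np.sum(np.log(sig2) + eps ** 2 / sig2)

res = minimize(neg_quasi_loglik, x0=[0.0, 0.2, 0.2], method="Nelder-Mead")
print("QMLE of (phi, omega, alpha):", np.round(res.x, 3))  # close to (0.3, 0.1, 0.4)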
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effect, and in-mean effect), usual GARCH models, and continuous-time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous-time GARCH modeling can be recovered in our framework.
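A minimal simulation sketch of a first-order SR-SARV-type process, illustrating the ARMA property of squared innovations (for an ARMA(1,1) autocorrelation function, the ratio of successive autocorrelations is constant). The AR(1) factor specification with truncation at zero and all parameter values are illustrative assumptions, not the paper's model.

import numpy as np

rng = np.random.default_rng(3)
T = 100000
omega, gamma, nu = 0.05, 0.9, 0.05   # AR(1) dynamics for the conditional variance factor
f = np.empty(T)
r = np.empty(T)
f[0] = omega / (1 - gamma)           # start at the unconditional mean
r[0] = 0.0
for t in range(1, T):
    # State-space volatility: f_t is AR(1), and r_t = sqrt(f_{t-1}) u_t.
    f[t] = max(omega + gamma * f[t - 1] + nu * rng.standard_normal(), 1e-8)
    r[t] = np.sqrt(f[t - 1]) * rng.standard_normal()

r2 = r[1:] ** 2
acf = [np.corrcoef(r2[:-k], r2[k:])[0, 1] for k in range(1, 6)]
print("ACF of squared innovations:", np.round(acf, 3))
print("ratios acf[k+1]/acf[k] (ARMA(1,1) pattern, near gamma):",
      np.round([acf[k + 1] / acf[k] for k in range(4)], 3))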
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries, and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
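The MC test principle underlying these procedures is short enough to sketch: when the test statistic is pivotal under the null, ranking the observed statistic among N simulated replications yields an exact p-value; the MMC version would additionally maximize this p-value over nuisance parameters. The statistic, sample size, and data below are illustrative, not the paper's tests.

import numpy as np

rng = np.random.default_rng(4)

def mc_pvalue(stat_obs, simulate_stat, n_rep=99):
    """Exact MC p-value: (1 + #{simulated >= observed}) / (n_rep + 1)."""
    sims = np.array([simulate_stat() for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# Example: test for excess kurtosis in an i.i.d. Gaussian sample of size 50.
def kurtosis_stat(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4)

x_obs = rng.standard_t(df=5, size=50)    # data actually have fat tails
p = mc_pvalue(kurtosis_stat(x_obs), lambda: kurtosis_stat(rng.standard_normal(50)))
print(f"MC p-value for excess kurtosis: {p:.3f}")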
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to a simulation-based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture, and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double, and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
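A minimal sketch of the building blocks: least squares residuals standardized with the Cholesky factor of their sample covariance (so the result is invariant to the error covariance matrix), and Mardia-type multivariate skewness and kurtosis computed from them. The simulated regression design is an illustrative assumption; the paper's tests compare such statistics to simulation-based reference values.

import numpy as np

rng = np.random.default_rng(5)
T, k, n_eq = 200, 3, 4
X = np.column_stack([np.ones(T), rng.standard_normal((T, k - 1))])
U = rng.standard_normal((T, n_eq))                      # true errors: multivariate normal
B = rng.standard_normal((k, n_eq))
Y = X @ B + U

U_hat = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]    # least squares residuals
S = U_hat.T @ U_hat / T                                 # sample error covariance
W = U_hat @ np.linalg.inv(np.linalg.cholesky(S)).T      # standardized residuals

G = W @ W.T                                             # G[i, j] = u_i' S^{-1} u_j
skew = np.sum(G ** 3) / T ** 2                          # Mardia multivariate skewness
kurt = np.mean(np.diag(G) ** 2)                         # Mardia multivariate kurtosis
print(f"Mardia skewness = {skew:.3f}, kurtosis = {kurt:.3f} "
      f"(Gaussian reference for kurtosis: n_eq*(n_eq+2) = {n_eq * (n_eq + 2)})")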
Abstract:
This paper studies the interdependence between fiscal and monetary policies, and their joint role in the determination of the price level. The government is characterized by a long-run fiscal policy rule whereby a given fraction of the outstanding debt, say d, is backed by the present discounted value of current and future primary surpluses. The remaining debt is backed by seigniorage revenue. The parameter d characterizes the interdependence between fiscal and monetary authorities. It is shown that in a standard monetary economy, this policy rule implies that the price level depends not only on the money stock, but also on the proportion of debt that is backed with money. Empirical estimates of d are obtained for OECD countries using data on nominal consumption, monetary base, and debt. Results indicate that debt plays only a minor role in the determination of the price level in these economies. Estimates of d correlate well with institutional measures of central bank independence.
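A deliberately stylized numerical illustration of the comparative static described above: the larger the fraction d of debt backed by surpluses, the smaller the liabilities that must be backed by seigniorage, and the lower the price level. The one-line valuation formula and all numbers are assumptions for this demo only, not the paper's model or estimates.

# Toy quantity-theoretic valuation: only money plus the unbacked share of debt
# must be absorbed by a fixed real demand for government liabilities.
M, B = 100.0, 300.0         # money stock and nominal debt (illustrative units)
real_money_demand = 50.0    # assumed constant real demand for government liabilities

for d in (0.0, 0.5, 0.9, 1.0):
    effective_nominal_liabilities = M + (1 - d) * B   # debt not backed by surpluses
    P = effective_nominal_liabilities / real_money_demand
    print(f"d = {d:.1f} -> price level P = {P:.2f}")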
Abstract:
McCausland (2004a) describes a new theory of random consumer demand. Theoretically consistent random demand can be represented by a "regular" "L-utility" function on the consumption set X. The present paper is about Bayesian inference for regular L-utility functions. We express prior and posterior uncertainty in terms of distributions over the infinite-dimensional parameter set of a flexible functional form. We propose a class of proper priors on the parameter set. The priors are flexible, in the sense that they put positive probability in the neighborhood of any L-utility function that is regular on a large subset bar(X) of X; and regular, in the sense that they assign zero probability to the set of L-utility functions that are irregular on bar(X). We propose methods of Bayesian inference for an environment with indivisible goods, leaving the more difficult case of infinitely divisible goods for another paper. We analyse individual choice data from a consumer experiment described in Harbaugh et al. (2001).