Abstract:
Presented at the 34th congress of the Corporation des bibliothécaires professionnels du Québec (CBPQ).
Abstract:
Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structures, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates, and other state variables. We address the general issue of econometric specification of dynamic asset pricing models, which covers the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
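In generic notation (the symbols below are illustrative and not taken from the paper), the unifying SDF representation sketched above can be written as:

```latex
% SDF spanned by K factors, with coefficients driven by state variables s_t
m_{t+1} = \sum_{j=1}^{K} \theta_j(s_t)\, f_{j,t+1},
\qquad
\mathbb{E}\left[\, m_{t+1} R_{i,t+1} \mid \mathcal{F}_t \,\right] = 1
\quad \text{for every asset } i,
```

so that the factors span the SDF, the state variables $s_t$ drive its coefficients, and conditional beta pricing relations follow from projecting returns on the factor space.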
Abstract:
This paper develops and estimates a game-theoretical model of inflation targeting where the central banker's preferences are asymmetric around the targeted rate. In particular, positive deviations from the target can be weighted more, or less, severely than negative ones in the central banker's loss function. It is shown that some of the previous results derived under the assumption of symmetry are not robust to the generalization of preferences. Estimates of the central banker's preference parameters for Canada, Sweden, and the United Kingdom are statistically different from the ones implied by the commonly used quadratic loss function. Econometric results are robust to different forecasting models for the rate of unemployment but not to the use of measures of inflation broader than the one targeted.
Abstract:
This paper studies monetary policy in an economy where the central banker's preferences are asymmetric around optimal inflation. In particular, positive deviations from the optimum can be weighted more, or less, severely than negative deviations in the policy maker's loss function. It is shown that under asymmetric preferences, uncertainty can induce a prudent behavior on the part of the central banker. Since the prudence motive can be large enough to override the inflation bias, optimal monetary policy could be implemented even in the absence of rules, reputation, or contractual mechanisms. For certain parameter values, a deflationary bias can arise in equilibrium.
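Asymmetric central-bank preferences of this kind are commonly formalized with a linex loss function (a standard specification in this literature; the notation here is generic, not necessarily the paper's):

```latex
% Linex loss around the optimal inflation rate \pi^*; \gamma governs asymmetry
L(\pi_t) = \frac{\exp\{\gamma(\pi_t - \pi^*)\} - \gamma(\pi_t - \pi^*) - 1}{\gamma^2}
```

For $\gamma > 0$ positive deviations are penalized more heavily than negative ones, the quadratic loss $(\pi_t - \pi^*)^2/2$ is recovered as $\gamma \to 0$, and the convexity of the marginal loss makes expected loss increase with inflation uncertainty, which is the source of the prudence motive described above.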
Abstract:
A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problems, in particular those raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
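The Monte Carlo test logic can be sketched for one simple case, a Goldfeld-Quandt-type variance-ratio statistic under a Gaussian null (all names and parameter values here are illustrative, not the authors' code):

```python
import numpy as np

def gq_statistic(resid, k):
    """Goldfeld-Quandt-type ratio: variance of the last k residuals
    over the variance of the first k."""
    return np.var(resid[-k:], ddof=1) / np.var(resid[:k], ddof=1)

def mc_pvalue(stat_obs, n, k, n_rep=99, seed=0):
    """Monte Carlo p-value: simulate the statistic under the
    homoskedastic Gaussian null and rank the observed value among the
    simulated ones. When (n_rep + 1) * alpha is an integer, rejecting
    for p-values <= alpha yields a test with exactly level alpha."""
    rng = np.random.default_rng(seed)
    sims = np.array([gq_statistic(rng.standard_normal(n), k)
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# Illustration: the error standard deviation doubles halfway through
rng = np.random.default_rng(42)
n, k = 60, 25
errors = np.concatenate([rng.standard_normal(30),
                         2.0 * rng.standard_normal(30)])
p_val = mc_pvalue(gq_statistic(errors, k), n, k)
```

Because the statistic is simulated rather than approximated, the p-value is exact for any simulable null distribution, which is what allows the same recipe to cover heavy-tailed or stable disturbances.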
Abstract:
The rationalizability of a choice function by means of a transitive relation has been analyzed thoroughly in the literature. However, not much seems to be known when transitivity is weakened to quasi-transitivity or acyclicity. We describe the logical relationships between the different notions of rationalizability involving, for example, the transitivity, quasi-transitivity, or acyclicity of the rationalizing relation. Furthermore, we discuss sufficient conditions and necessary conditions for rational choice on arbitrary domains. Transitive, quasi-transitive, and acyclical rationalizability are fully characterized for domains that contain all singletons and all two-element subsets of the universal set.
Abstract:
We consider a probabilistic approach to the problem of assigning k indivisible identical objects to a set of agents with single-peaked preferences. Using the ordinal extension of preferences, we characterize the class of uniform probabilistic rules by Pareto efficiency, strategy-proofness, and no-envy. We also show that in this characterization no-envy cannot be replaced by anonymity. When agents are strictly risk-averse von Neumann-Morgenstern utility maximizers, we reduce the problem of assigning k identical objects to a problem of allocating the amount k of an infinitely divisible commodity.
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature in assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials respectively. The eigenfunction approach has at least six advantages: (i) it is general, since any square integrable function may be written as a linear combination of the eigenfunctions; (ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal components analysis; (iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; (iv) more importantly, this generates fat tails for the variance and returns processes; (v) in contrast to popular models, the variance of the variance is a flexible function of the variance; (vi) these models are closed under temporal aggregation.
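For the log-normal case mentioned above, the state is a stationary Gaussian AR(1) and the eigenfunctions of its conditional expectation operator are the Hermite polynomials, with eigenvalues equal to powers of the autocorrelation. A small simulation can illustrate this eigenfunction property (an illustrative sketch, not the paper's code):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Gaussian AR(1) state with N(0,1) stationary distribution:
# x_{t+1} = rho * x_t + sqrt(1 - rho^2) * eps_{t+1}
rho, x0 = 0.8, 0.5            # autocorrelation and conditioning value
rng = np.random.default_rng(1)
x1 = rho * x0 + np.sqrt(1.0 - rho**2) * rng.standard_normal(200_000)

# He_2(x) = x^2 - 1 is an eigenfunction of the conditional expectation
# operator with eigenvalue rho^2: E[He_2(x_{t+1}) | x_t] = rho^2 * He_2(x_t)
he2 = lambda x: hermeval(x, [0.0, 0.0, 1.0])
lhs = he2(x1).mean()          # Monte Carlo estimate of the left-hand side
rhs = rho**2 * he2(x0)        # eigenvalue times eigenfunction at x0
```

Because each eigenfunction decays geometrically under conditioning, any variance built as a linear combination of them inherits ARMA-type dynamics, which is the forecasting convenience the abstract points to.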
Abstract:
In spatial environments, we consider social welfare functions satisfying Arrow's requirements, i.e., weak Pareto and independence of irrelevant alternatives. When the policy space is a one-dimensional continuum, such a welfare function is determined by a collection of 2n strictly quasi-concave preferences and a tie-breaking rule. As a corollary, we obtain that when the number of voters is odd, simple majority voting is transitive if and only if each voter's preference is strictly quasi-concave. When the policy space is multi-dimensional, we establish Arrow's impossibility theorem. Among other results, we show that weak Pareto, independence of irrelevant alternatives, and non-dictatorship are inconsistent if the set of alternatives has a non-empty interior and is compact and convex.
Abstract:
The focus of the paper is the nonparametric estimation of an instrumental regression function P defined by conditional moment restrictions stemming from a structural econometric model: E[Y - P(Z) | W] = 0, involving endogenous variables Y and Z and instruments W. The function P is the solution of an ill-posed inverse problem, and we propose an estimation procedure based on Tikhonov regularization. The paper analyses identification and overidentification of this model and presents asymptotic properties of the estimated nonparametric instrumental regression function.
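The essence of Tikhonov regularization is easy to see on a finite-dimensional analogue: for an ill-conditioned linear system Kp = r, the unstable inverse is replaced by (αI + K'K)⁻¹K'r. The sketch below uses a classically ill-conditioned matrix as a stand-in for the discretized operator; everything in it is illustrative, not the authors' estimator:

```python
import numpy as np

def tikhonov_solve(K, r, alpha):
    """Regularized solution of the ill-posed system K p = r:
    minimize ||K p - r||^2 + alpha * ||p||^2, whose first-order
    condition gives p_alpha = (alpha I + K'K)^{-1} K' r."""
    n = K.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + K.T @ K, K.T @ r)

# Hilbert matrix: a standard example of severe ill-conditioning
n = 12
K = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
p_true = np.ones(n)
rng = np.random.default_rng(0)
r = K @ p_true + 1e-10 * rng.standard_normal(n)   # tiny "sampling" noise

p_naive = np.linalg.solve(K, r)              # noise is wildly amplified
p_reg = tikhonov_solve(K, r, alpha=1e-8)     # stabilized estimate
err_naive = np.linalg.norm(p_naive - p_true)
err_reg = np.linalg.norm(p_reg - p_true)
```

The penalty α trades a small bias for a large reduction in variance, which is exactly the role the regularization parameter plays in the nonparametric instrumental setting.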
Abstract:
Consistency of a binary relation requires any preference cycle to involve indifference only. As shown by Suzumura (1976b), consistency is necessary and sufficient for the existence of an ordering extension of a relation. Because of this important role of consistency, it is of interest to examine the rationalizability of choice functions by means of consistent relations. We describe the logical relationships between the different notions of rationalizability obtained if reflexivity or completeness are added to consistency, both for greatest-element rationalizability and for maximal-element rationalizability. All but one notion of consistent rationalizability are characterized for general domains, and all of them are characterized for domains that contain all two-element subsets of the universal set.
Abstract:
This text surveys the various approaches designed to measure technical progress. We use a uniform notation throughout the mathematical derivations and highlight the assumptions that make the proposed methods applicable and that limit their scope. The various approaches are grouped according to a classification suggested by Diewert (1981), under which two groups are to be distinguished. The first group contains all methods defining technical progress as the growth rate of an output index divided by an input index (Divisia approach). The other group includes all methods defining technical progress as the shift of a function representing the technology (production, cost, distance). This second group is subdivided into the econometric approach, index number theory, and the nonparametric approach. A list of the principal economists to whom the various approaches are due is provided. This survey is nevertheless sufficiently detailed to be read without referring to the original articles.
Abstract:
In this article I have sought to demonstrate the links that exist between the quantity theory of money, the theory of the markup, and inflation. Although it is not necessary to assume equilibrium and the IS-LM curves, my theory of fictitious capital is compatible with Tobin's q. The main advantage of the flexible markup theory is to show how inflation is a function not only of the price and productivity of labour, but also of the price and productivity of capital, its depreciation rate, and its financing rate. The new econometric results obtained from annual Canadian data demonstrate beyond any doubt the soundness of the relationship between fictitious capital and inflation.
Abstract:
In this article we study the effect of uncertainty on an entrepreneur who must choose the capacity of his business before knowing the demand for his product. The unit profit of operation is known with certainty, but there is no flexibility in our one-period framework. We show how the introduction of global uncertainty reduces the investment of the risk-neutral entrepreneur and, even more, that of the risk-averse one. We also show how marginal increases in risk reduce the optimal capacity of both the risk-neutral and the risk-averse entrepreneur, without any restriction on the concave utility function and with limited restrictions on the definition of a mean-preserving spread. These general results are explained by the fact that the newsboy has a piecewise-linear, concave monetary payoff with a kink endogenously determined at the level of optimal capacity. Our results are compared with those in the two literatures on price uncertainty and demand uncertainty and, particularly, with the recent contributions of Eeckhoudt, Gollier and Schlesinger (1991, 1995).
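The risk-neutral benchmark in this setting is the classical newsvendor solution: with unit price p and unit capacity cost c, expected profit is maximized where P(D > K) = c/p. A minimal numerical sketch under assumed parameters (uniform demand, illustrative prices, not the paper's calibration):

```python
import numpy as np

# Illustrative parameters: unit price p, unit capacity cost c;
# capacity K is chosen before demand D is known
p_price, c_cost = 10.0, 4.0
rng = np.random.default_rng(0)
demand = rng.uniform(0.0, 100.0, size=200_000)

def expected_profit(K):
    """Piecewise-linear payoff p*min(K, D) - c*K averaged over demand
    draws; the kink at K drives the comparative statics in the text."""
    return np.mean(p_price * np.minimum(K, demand) - c_cost * K)

# Risk-neutral optimum satisfies the critical fractile condition
# P(D > K*) = c/p, i.e. K* = F^{-1}(1 - c/p) = 100 * (1 - 0.4) = 60 here
grid = np.linspace(0.0, 100.0, 201)
K_star = grid[np.argmax([expected_profit(K) for K in grid])]
```

The concave kinked payoff visible in `expected_profit` is why increases in demand risk push the optimal capacity down even for a risk-neutral decision maker.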
Abstract:
This study analyzes the dynamic effects of the devaluation of the CFA franc using a monetary, intertemporal, multisectoral general equilibrium model. Particular emphasis is placed on the interactions between devaluation and capital accumulation. In the model, the effects of the parity change operate through the labour market, which is characterized by nominal wage inertia. The results show that the devaluation boosts investment, with expansionary effects on economic activity. The monetary shock had only a limited impact on the budget and trade balances. An accompanying measure such as a reduction in public-sector wages improves both balances but triggers a recessionary process.