23 results for Stochastic Dominance
at Université de Montréal, Canada
Abstract:
We study the problem of provision and cost-sharing of a public good in large economies where exclusion, complete or partial, is possible. We search for incentive-constrained efficient allocation rules that display fairness properties. Population monotonicity says that an increase in population should not be detrimental to anyone. Demand monotonicity states that an increase in the demand for the public good (in the sense of a first-order stochastic dominance shift in the distribution of preferences) should not be detrimental to any agent whose preferences remain unchanged. Under suitable domain restrictions, there exists a unique incentive-constrained efficient and demand-monotonic allocation rule: the so-called serial rule. In the binary public good case, the serial rule is also the only incentive-constrained efficient and population-monotonic rule.
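For readers unfamiliar with the serial rule, here is a minimal sketch of serial cost sharing in the Moulin–Shenker spirit. The function name and the quadratic cost in the example are illustrative, and the paper's incentive-constrained rule for an excludable public good need not coincide with this textbook private-good version.

```python
def serial_cost_shares(demands, cost):
    """Serial (Moulin-Shenker) cost shares for a list of demands.

    `cost` maps an aggregate quantity to total cost; the returned shares
    (in ascending order of demand) sum to cost(sum(demands)).
    """
    q = sorted(demands)
    n = len(q)
    shares, s_prev, paid_so_far = [], 0.0, 0.0
    for k in range(n):
        # Hypothetical aggregate if every agent still "active" at step k
        # consumed no more than the k-th smallest demand q[k].
        s_k = sum(q[: k + 1]) + (n - k - 1) * q[k]
        paid_so_far += (cost(s_k) - cost(s_prev)) / (n - k)
        shares.append(paid_so_far)
        s_prev = s_k
    return shares

# Example: quadratic cost, demands 1, 2, 4 -> shares 3, 11, 35 (sum = C(7) = 49).
print(serial_cost_shares([1, 2, 4], lambda y: y ** 2))
```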
Abstract:
The objective of this study is to carry out a comparative analysis of poverty and of the consumption structure of households in the capital cities of the member states of the West African Economic and Monetary Union (UEMOA), using data from the household expenditure surveys (EDM) conducted in these agglomerations in 2008. Based on the monetary approach to poverty, implemented through the cost-of-basic-needs method, it emerges that more than one household in ten, i.e. 10.5% of the population studied, lives below the poverty line, estimated at 277,450 CFA francs of annual expenditure per consumption unit. A first-order stochastic dominance test confirms that the phenomenon is, on average, more pronounced in the Sahelian cities (Bamako, Niamey, Ouagadougou) than in the large coastal cities (Abidjan, Dakar, Lomé). Moreover, the econometric analysis reveals that household size and the human capital of the household head are important determinants of households' monetary standard of living: while the risk of poverty is higher for large households, a household's standard of living rises with the education level of its head. In addition, in the agglomerations where the poverty rate is highest, households devote a larger share of their budget to food expenditure. Finally, estimating total-expenditure elasticities of demand with a linear regression suggests that, on average, households' unmet consumption needs concern services (transport, communication, health, education, and leisure).
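A minimal sketch of the kind of first-order dominance comparison the abstract refers to, assuming two samples of per-consumption-unit expenditures (array names are illustrative, not from the EDM data); a real test would add statistical inference, e.g. bootstrapped critical values.

```python
import numpy as np

def fosd(sample_a, sample_b, grid_size=500):
    """True if sample_a first-order stochastically dominates sample_b,
    i.e. the empirical CDF of a lies weakly below that of b everywhere.
    In the poverty reading, the dominated sample (higher CDF) has more
    households below any conceivable poverty line."""
    a, b = np.sort(sample_a), np.sort(sample_b)
    grid = np.linspace(min(a[0], b[0]), max(a[-1], b[-1]), grid_size)
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return bool(np.all(cdf_a <= cdf_b))
```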
Abstract:
In this thesis, I consider a standard selection model with nonrandom selection. First, I discuss the validity and sharpness of the bounds on the interquantile interval of the distribution of the uncensored latent random variable derived by Blundell et al. (2007). I then derive sharp bounds on the interquantile interval when the observed distribution first-order stochastically dominates the unobserved one. Finally, I discuss the sharpness of the bounds on the variance of the latent variable's distribution derived by Stoye (2010). I show that these bounds are valid but not necessarily sharp, and I propose sharp lower bounds for the variance and the coefficient of variation of that distribution.
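A sketch of the Manski-type worst-case quantile bounds that underlie the Blundell et al. (2007) analysis, assuming only an observed sample and the observation probability p; the function name is hypothetical, and the sharp bounds derived in the thesis under the stochastic dominance assumption would be tighter.

```python
import numpy as np

def latent_quantile_bounds(observed, p, tau):
    """Worst-case bounds on the tau-quantile of a latent outcome observed
    only with probability p. The latent CDF F satisfies
    p*F_obs(y) <= F(y) <= p*F_obs(y) + (1 - p) for every y, which inverts
    into the quantile bounds below (infinite when uninformative)."""
    lo = np.quantile(observed, (tau - (1 - p)) / p) if tau > 1 - p else -np.inf
    hi = np.quantile(observed, tau / p) if tau <= p else np.inf
    return lo, hi

# Bounds on the interquantile interval Q(t2) - Q(t1), t1 < t2, follow as
# upper bound hi(t2) - lo(t1) and lower bound max(0, lo(t2) - hi(t1)).
```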
Asymmetry Risk, State Variables and Stochastic Discount Factor Specification in Asset Pricing Models
Abstract:
Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of state variables that summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between the contemporaneous returns of a large number of assets given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates, and other state variables. We address the general issue of econometric specification of dynamic asset pricing models, covering the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role in the validity of standard CAPM-like stock pricing and preference-free option pricing.
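A small simulated illustration of the basic SDF pricing relation the abstract builds on: with a pricing kernel linear in a single demeaned factor, E[m·Re] = 0 forces expected excess returns to line up with betas. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, betas = 200_000, np.array([0.2, 0.5, 1.0, 1.4])

# SDF linear in one demeaned factor: m = a + b*(f - E[f]).
a, b, sigma_f = 1.0, -2.0, 0.2
f = sigma_f * rng.standard_normal(T)      # demeaned factor realizations
m = a + b * f                             # pricing kernel
lam = -(b / a) * sigma_f ** 2             # implied factor risk premium

# Excess returns whose means are dictated by beta pricing: E[Re_i] = beta_i * lam.
eps = 0.1 * rng.standard_normal((T, betas.size))   # diversifiable risk
Re = betas * lam + np.outer(f, betas) + eps

print((m[:, None] * Re).mean(axis=0))  # ~0: E[m * Re] = 0 for every asset
print(Re.mean(axis=0), betas * lam)    # sample means vs. beta * lambda
```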
Abstract:
The GARCH and stochastic volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since the econometrician generally has no idea of anything like a structural level of disaggregation, a well-written volatility model should be specified so that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the robustness properties required with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other. By relaxing these assumptions through a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects, and in-mean effects), usual GARCH models, and continuous-time stochastic volatility models, so that previous results on the aggregation of weak GARCH and on continuous-time GARCH modelling can be recovered in our framework.
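To make the contrast concrete, here is a stylized simulation, with illustrative parameters, of a GARCH(1,1) variance (an exact function of past returns) next to an SR-SARV-style state that keeps the same AR(1) mean dynamics but carries its own innovation; the actual SR-SARV definition imposes regularity conditions elided here.

```python
import numpy as np

rng = np.random.default_rng(1)
T, omega, alpha, beta = 1000, 0.05, 0.08, 0.90

# GARCH(1,1): tomorrow's conditional variance is an exact (degenerate)
# function of today's return and variance.
h, e = np.empty(T), np.empty(T)
h[0] = omega / (1 - alpha - beta)          # unconditional variance
for t in range(T):
    e[t] = np.sqrt(h[t]) * rng.standard_normal()
    if t + 1 < T:
        h[t + 1] = omega + alpha * e[t] ** 2 + beta * h[t]

# SR-SARV-style relaxation: same AR(1) dynamics in conditional mean,
# E[f_{t+1} | past] = omega + (alpha + beta) * f_t, but the variance state
# carries its own shock, so it is no longer measurable w.r.t. past returns.
f, u = np.empty(T), np.empty(T)
f[0] = h[0]
for t in range(T):
    u[t] = np.sqrt(f[t]) * rng.standard_normal()
    if t + 1 < T:
        f[t + 1] = max(omega + (alpha + beta) * f[t]
                       + 0.02 * rng.standard_normal(), 1e-8)
```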
Abstract:
This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors, 2) Generalized Method of Moments, 3) Simulated Method of Moments, and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
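As a toy illustration of one of the four procedures, the following Simulated Method of Moments sketch estimates an AR(1) persistence parameter by matching a variance and a first-order autocorrelation under an identity weighting matrix; the model and moment choices are illustrative stand-ins for a solved DSGE.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

def simulate_ar1(rho, T, shocks):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + shocks[t]
    return x

# "Data" from an AR(1) with rho = 0.8, playing the role of the model solution.
data = simulate_ar1(0.8, 500, rng.standard_normal(500))
target = np.array([data.var(), np.corrcoef(data[1:], data[:-1])[0, 1]])

# SMM: choose rho so that moments of a long simulated series match the data's.
sim_shocks = rng.standard_normal(10 * 500)   # fixed draws across evaluations
def smm_loss(rho):
    x = simulate_ar1(rho, sim_shocks.size, sim_shocks)
    sim = np.array([x.var(), np.corrcoef(x[1:], x[:-1])[0, 1]])
    return ((sim - target) ** 2).sum()       # identity weighting matrix

rho_hat = minimize_scalar(smm_loss, bounds=(0.0, 0.99), method="bounded").x
print(rho_hat)
```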
Abstract:
The paper investigates the pricing of derivative securities with calendar-time maturities.
Abstract:
This paper, prepared for the Handbook of Statistics (Vol. 14: Statistical Methods in Finance), surveys the subject of stochastic volatility. The following subjects are covered: volatility in financial markets (instantaneous volatility of asset returns, implied volatilities in option prices, and related stylized facts); statistical modelling in discrete and continuous time; and, finally, statistical inference (methods of moments, quasi-maximum likelihood, likelihood-based and Bayesian methods, and indirect inference).
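A minimal sketch of the canonical discrete-time SV model and of the log-squared-return linearization behind quasi-maximum likelihood, with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)
T, mu, phi, sigma_eta = 2000, -1.0, 0.95, 0.2

# Canonical discrete-time SV model: log-variance is a latent AR(1).
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
r = np.exp(h / 2) * rng.standard_normal(T)    # returns with SV

# QML linearization: ln r_t^2 = h_t + ln z_t^2, where for Gaussian z,
# E[ln z^2] ~ -1.27 and Var[ln z^2] = pi^2 / 2, giving a linear state-space
# model a Kalman filter can handle (treating ln z^2 as if it were Gaussian).
y = np.log(r ** 2 + 1e-12)
print(y.mean() + 1.27, mu)   # rough check: E[y] = mu + E[ln z^2]
```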
Abstract:
This paper studies the theoretical and empirical implications of monetary policy making by committee under three different voting protocols. The protocols are a consensus model, where a supermajority is required for a policy change; an agenda-setting model, where the chairman controls the agenda; and a simple majority model, where policy is determined by the median member. These protocols give preeminence to different aspects of the actual decision-making process and capture the observed heterogeneity in formal procedures across central banks. The models are estimated by Maximum Likelihood using interest rate decisions by the committees of five central banks, namely the Bank of Canada, the Bank of England, the European Central Bank, the Swedish Riksbank, and the U.S. Federal Reserve. For all central banks, results indicate that the consensus model is statistically superior to the alternative models. This suggests that, despite institutional differences, committees share unwritten rules and informal procedures that deliver observationally equivalent policy decisions.
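A stylized sketch of how the three protocols can map members' preferred rates into a committee decision, assuming single-peaked preferences; the supermajority threshold and the naive chair proposal are illustrative simplifications, not the estimated models of the paper.

```python
import numpy as np

def committee_rate(ideal_rates, status_quo, protocol, chair_idx=0, q=2/3):
    """Stylized committee decision under three voting protocols.
    ideal_rates: each member's preferred interest rate."""
    ideal = np.asarray(ideal_rates, dtype=float)
    if protocol == "majority":        # median member decides
        return np.median(ideal)
    if protocol == "consensus":       # move only if a q-supermajority
        target = np.median(ideal)     # prefers the median to the status quo
        prefer = np.abs(ideal - target) < np.abs(ideal - status_quo)
        return target if prefer.mean() >= q else status_quo
    if protocol == "agenda":          # chair proposes; majority vs. status quo
        proposal = ideal[chair_idx]   # (a strategic chair would propose the
        prefer = np.abs(ideal - proposal) < np.abs(ideal - status_quo)
        return proposal if prefer.mean() > 0.5 else status_quo  # closest passing point)
    raise ValueError(protocol)

# Example: same preferences, three different decisions.
for p in ("majority", "consensus", "agenda"):
    print(p, committee_rate([1.0, 1.25, 1.5, 1.75, 2.5], 1.5, p))
```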
Abstract:
Although social status is known to have a strong influence on fitness, the factors affecting social status and changes in that status remain poorly understood. Moreover, studies of dominance in relation to aggression rarely focus on females. We investigate these questions using Neolamprologus pulcher, a cooperatively breeding fish from Lake Tanganyika. The probability of social ascension was manipulated in the field, and the behavioural and physiological changes, as well as the plasma testosterone levels, associated with subordinate females' ascension to dominance were characterized. Ascending females showed greater cooperation and greater body mass than non-ascending females from the same social group. After one week of social ascension, ascending females did not differ behaviourally, but did differ physiologically, from dominant females. Dominant, ascending, and subordinate females did not differ in plasma testosterone levels. Understanding the benefits of cooperative behaviour for subordinates has long posed an evolutionary puzzle. Our results imply that metabolically costly behaviours may have been selected for because they improve future fitness through inheritance of the territory and of social status. Moreover, the degree of cooperation could be a quality signal detected by competitors and collaborators.