670 results for "Validité de construit"


Relevance: 10.00%

Abstract:

Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
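For reference, the SDF framework sketched above can be written in generic notation (symbols are ours, not necessarily the paper's): the pricing kernel is spanned by the factors, with coefficients that are deterministic functions of the state variables, and the implied conditional beta pricing relation compensates factor risk only.

```latex
% SDF spanned by K factors, with coefficients driven by state variables s_t:
m_{t+1} \;=\; \sum_{k=1}^{K} \lambda_k(s_t)\, F_{k,t+1},
\qquad
\mathbb{E}_t\!\left[\, m_{t+1}\, R_{i,t+1} \,\right] \;=\; 1
\quad \text{for every asset } i.

% implied conditional beta pricing: only factor risk is priced
\mathbb{E}_t\!\left[ R_{i,t+1} \right] - r_{f,t}
  \;=\; \beta_{i,t}^{\prime}\, \pi_t ,
\qquad
\beta_{i,t} \;=\; \mathrm{Var}_t\!\left(F_{t+1}\right)^{-1}
  \mathrm{Cov}_t\!\left(F_{t+1},\, R_{i,t+1}\right).
```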

Relevance: 10.00%

Abstract:

Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
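The moving blocks bootstrap referred to above resamples overlapping blocks of the series, with replacement, to preserve serial dependence. A minimal sketch of a block-bootstrap variance estimate for the sample mean (the MA(1) data below are our own toy example, not the paper's setup):

```python
import numpy as np

def moving_blocks_bootstrap_mean(x, block_len, n_boot, rng=None):
    """Moving blocks bootstrap (Kunsch 1989; Liu and Singh 1992) for the
    sample mean: draw overlapping blocks of length `block_len` with
    replacement and concatenate them to rebuild a series of length n."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    # random block starts; each block is x[s : s + block_len]
    starts = rng.integers(0, n - block_len + 1, size=(n_boot, n_blocks))
    means = np.empty(n_boot)
    for b in range(n_boot):
        resampled = np.concatenate([x[s:s + block_len] for s in starts[b]])[:n]
        means[b] = resampled.mean()
    return means

# toy dependent series: MA(1), so the i.i.d. bootstrap variance would be biased
rng = np.random.default_rng(0)
e = rng.standard_normal(501)
x = e[1:] + 0.5 * e[:-1]
boot_means = moving_blocks_bootstrap_mean(x, block_len=10, n_boot=2000, rng=1)
var_hat = boot_means.var()   # bootstrap variance estimate for the sample mean
```

The block length trades off bias (too short misses dependence) against variance (too long leaves few distinct blocks); the stationary bootstrap instead draws geometrically distributed block lengths.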

Relevance: 10.00%

Abstract:

This paper tests the predictions of the Barro-Gordon model using US data on inflation and unemployment. To that end, it constructs a general game-theoretical model with asymmetric preferences that nests the Barro-Gordon model and a version of Cukierman’s model as special cases. Likelihood Ratio tests indicate that the restriction imposed by the Barro-Gordon model is rejected by the data but the one imposed by the version of Cukierman’s model is not. Reduced-form estimates are consistent with the view that the Federal Reserve weighs positive unemployment deviations from the expected natural rate more heavily than negative ones.
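The Likelihood Ratio logic used above is the standard way to test a nested restriction: twice the log-likelihood gap is asymptotically chi-squared with degrees of freedom equal to the number of restrictions. A minimal sketch on a made-up Gaussian regression (none of the variables refer to the paper's actual model or data):

```python
import numpy as np

def gaussian_loglik(resid):
    """Concentrated Gaussian log-likelihood of a regression (variance profiled out)."""
    n = len(resid)
    s2 = resid @ resid / n
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def lr_test(y, X_restricted, X_full):
    """LR statistic for nested OLS/Gaussian models: 2*(ll_full - ll_restricted),
    asymptotically chi2 with df = number of excluded regressors under H0."""
    def ols_resid(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta
    lr = 2.0 * (gaussian_loglik(ols_resid(X_full))
                - gaussian_loglik(ols_resid(X_restricted)))
    df = X_full.shape[1] - X_restricted.shape[1]
    return lr, df

# made-up data in which the extra regressor truly matters
rng = np.random.default_rng(0)
n = 300
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.standard_normal(n)
X_r = np.column_stack([np.ones(n), x1])        # restricted model: x2 excluded
X_f = np.column_stack([np.ones(n), x1, x2])    # full model
lr, df = lr_test(y, X_r, X_f)
reject = lr > 3.84                             # chi2(1) 5% critical value
```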

Relevance: 10.00%

Abstract:

This paper studies the persistent effects of monetary shocks on output. Previous empirical literature documents this persistence, but standard general equilibrium models with sticky prices fail to generate output responses beyond the duration of nominal contracts. This paper constructs and estimates a general equilibrium model with price rigidities, habit formation, and costly capital adjustment. The model is estimated via Maximum Likelihood using US data on output, the real money stock, and the nominal interest rate. Econometric results suggest that habit formation and adjustment costs to capital play an important role in explaining the output effects of monetary policy. In particular, impulse response analysis indicates that the model generates persistent, hump-shaped output responses to monetary shocks.

Relevance: 10.00%

Abstract:

We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first one uses a standard Wald-type statistic. The second approach assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets are derived by a projection technique. The latter has two advantages: first, confidence set validity is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases of transfers from Moroccan expatriates.
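The projection technique described above can be sketched numerically: given a confidence set for the free parameters, evaluate the endogenous outcome over that set and report its range, which remains valid under nonlinearity and is simultaneous across any number of outcomes. The sketch below is a crude inner approximation of the projection over a Wald ellipsoid; the function `g`, `theta_hat`, `cov` and the chi-squared cutoff are all illustrative, not taken from the paper:

```python
import numpy as np

def projection_interval(g, theta_hat, cov, chi2_crit, n_draws=20000, rng=None):
    """Projection-based confidence interval for a scalar outcome g(theta):
    sample the Wald ellipsoid {theta : (theta-theta_hat)' cov^-1 (theta-theta_hat)
    <= chi2_crit} and report [min g, max g] over the draws (an inner
    approximation; the exact projection optimizes over the set)."""
    rng = np.random.default_rng(rng)
    k = len(theta_hat)
    # uniform draws in the unit ball, mapped onto the ellipsoid
    z = rng.standard_normal((n_draws, k))
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    r = rng.random(n_draws) ** (1.0 / k)
    ball = z * r[:, None]
    L = np.linalg.cholesky(cov)
    thetas = theta_hat + np.sqrt(chi2_crit) * ball @ L.T
    vals = np.array([g(t) for t in thetas])
    return vals.min(), vals.max()

# hypothetical two-parameter model whose outcome is nonlinear in the parameters
theta_hat = np.array([1.0, 0.5])
cov = np.array([[0.04, 0.01], [0.01, 0.02]])
lo, hi = projection_interval(lambda t: t[0] * np.exp(t[1]), theta_hat, cov,
                             chi2_crit=5.99, rng=0)   # 5.99 = chi2(2) 95% point
```

Because every outcome is evaluated on the same parameter draws, intervals for additional endogenous variables come at no extra coverage cost, which is the simultaneity advantage noted in the abstract.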

Relevance: 10.00%

Abstract:

We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or “weak instruments”, so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the “structural parameters” of interest and build confidence sets. The second approach relies on “generated regressors”, which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) “asymptotically valid” under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin’s q and to a model of academic performance.
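The Anderson-Rubin (1949) statistic at the heart of the instrument substitution approach is simple to compute: regress the restricted residual on the instruments and form an F ratio. A hedged sketch for a single endogenous regressor (the data-generating design below is invented for illustration):

```python
import numpy as np

def anderson_rubin(y, Y, Z, beta0):
    """Anderson-Rubin (1949) statistic for H0: beta = beta0 in the structural
    equation y = beta * Y + u, with instrument matrix Z (n x k). Under Gaussian
    errors and exogenous instruments it is exactly F(k, n - k), whatever the
    strength of the instruments."""
    v = y - beta0 * Y                                   # restricted residual
    n, k = Z.shape
    fit = Z @ np.linalg.lstsq(Z, v, rcond=None)[0]      # projection of v on span(Z)
    ssr_z = fit @ fit                                   # variation explained by Z
    ssr_e = (v - fit) @ (v - fit)                       # leftover variation
    return (ssr_z / k) / (ssr_e / (n - k))

# invented design: strong instruments, endogenous regressor (u correlated with v)
rng = np.random.default_rng(0)
n = 200
Z = rng.standard_normal((n, 3))
v = rng.standard_normal(n)
Y = Z @ np.array([1.0, 1.0, 1.0]) + v      # first stage
u = 0.5 * v + rng.standard_normal(n)       # structural error, endogenous
y = 0.5 * Y + u                            # true beta = 0.5
ar_true = anderson_rubin(y, Y, Z, 0.5)     # small: H0 holds
ar_false = anderson_rubin(y, Y, Z, 0.0)    # large: H0 violated
```

A confidence set follows by inverting the test, i.e. collecting every beta0 whose statistic falls below the F critical value; no first-stage strength diagnostic is needed.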

Relevance: 10.00%

Abstract:

Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d. These procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with martingale difference sequence (m.d.s.) errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study, all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures for autoregressions based on the i.i.d. error assumption.
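A minimal sketch of the recursive-design wild bootstrap for an AR(1), one of the three proposals above: each residual is multiplied by an i.i.d. Rademacher sign and the series is rebuilt recursively, which preserves conditional heteroskedasticity. The toy data below, and the simplification to a no-intercept AR(1), are ours for illustration:

```python
import numpy as np

def recursive_wild_bootstrap_ar1(y, n_boot, rng=None):
    """Recursive-design wild bootstrap for a no-intercept AR(1): refit the
    model on series rebuilt recursively from residuals multiplied by i.i.d.
    Rademacher signs, which keeps each residual's magnitude (and hence any
    conditional heteroskedasticity) intact."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y, float)
    x, z = y[:-1], y[1:]
    rho_hat = (x @ z) / (x @ x)            # OLS autoregressive coefficient
    resid = z - rho_hat * x
    rhos = np.empty(n_boot)
    for b in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=resid.size)   # Rademacher weights
        e_star = resid * eta
        y_star = np.empty_like(y)
        y_star[0] = y[0]
        for t in range(1, y.size):                       # recursive design
            y_star[t] = rho_hat * y_star[t - 1] + e_star[t - 1]
        xs, zs = y_star[:-1], y_star[1:]
        rhos[b] = (xs @ zs) / (xs @ xs)
    return rho_hat, rhos

# toy AR(1) with conditionally heteroskedastic errors (i.i.d. resampling invalid)
rng = np.random.default_rng(0)
e = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + e[t] * np.sqrt(0.2 + 0.3 * y[t - 1] ** 2)
rho_hat, rhos = recursive_wild_bootstrap_ar1(y, n_boot=500, rng=1)
se_boot = rhos.std()    # bootstrap standard error of the AR coefficient
```

The fixed-design variant keeps the observed regressors and perturbs residuals only, while the pairwise bootstrap resamples (y_{t-1}, y_t) pairs; all three avoid the i.i.d. error assumption.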

Relevance: 10.00%

Abstract:

It is well known that standard asymptotic theory is not valid or is extremely unreliable in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, not for individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. Anderson-Rubin-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. So far, however, these projection techniques could be implemented only through costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution exploits the geometric properties of “quadrics” and can be viewed as an extension of the usual confidence intervals and ellipsoids. Only least squares techniques are required to build the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
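The quadric structure underlying the analytic solution can be stated compactly (generic notation of our own): since the Anderson-Rubin statistic is a ratio of quadratic forms in β, the confidence set obtained by inverting it at level α rearranges into a quadric, and projection intervals for linear combinations w′β follow in closed form.

```latex
\mathcal{C}_\beta(\alpha)
  \;=\; \{\beta : \mathrm{AR}(\beta) \le f_\alpha\}
  \;=\; \{\beta : \beta^{\prime} A\,\beta + b^{\prime}\beta + c \le 0\},
\qquad
\mathrm{CI}_\alpha\!\left(w^{\prime}\beta\right)
  \;=\; \Big[\, \min_{\beta \in \mathcal{C}_\beta(\alpha)} w^{\prime}\beta,\;
               \max_{\beta \in \mathcal{C}_\beta(\alpha)} w^{\prime}\beta \,\Big].
```

Here A, b and c depend on the data and the critical value f_α; the quadric may be an ellipsoid, an unbounded region, or the whole parameter space, which is why projection intervals can be unbounded under weak identification.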

Relevance: 10.00%

Abstract:

This paper proposes exact inference methods (tests and confidence regions) for linear regression models with errors that follow a second-order autoregressive process [AR(2)], which may be non-stationary. The proposed approach generalizes the one described in Dufour (1990) for a regression model with AR(1) errors and proceeds in three steps. First, we build an exact confidence region for the vector of autoregressive coefficients (φ). This region is obtained by inverting tests of error independence, applied to a transformed form of the model, against alternatives of dependence at lags one and two. Second, exploiting the duality between tests and confidence regions (test inversion), we determine a joint confidence region for the vector φ and a vector M of linear combinations of the regression coefficients of the model. Third, by a projection method, we obtain “marginal” confidence intervals as well as exact bound tests for the components of M. These methods are applied to models of the US money stock (M2) and price level (implicit GNP deflator).

Relevance: 10.00%

Abstract:

This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors; 2) Generalized Method of Moments; 3) Simulated Method of Moments; and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.

Relevance: 10.00%

Abstract:

Susan Kneebone, Monash University, Melbourne, Australia

Relevance: 10.00%

Abstract:

Département de linguistique et de traduction

Relevance: 10.00%

Abstract:

This master's thesis examines the constitution of the French third sector as a social and political actor. In many countries, relations between the state and the mutualist, cooperative, and associative organizations of civil society (a heterogeneous ensemble referred to here as the "third sector") have recently been formalized through partnerships. In France, this institutionalization was concretized in 2001 by the signing of a Charter (CPCA). We explore the hypothesis that, through institutionalization, the French third sector is constructing itself as an actor, with one or more identities of its own as well as a relatively well-defined social project. The dominant perspective in the international literature on the institutionalization of relations between the state and the third sector is that of an instrumentalization of third-sector organizations at the expense of their specificities and autonomy. This perspective seems limiting to us, as it appears blind to the organizations' capacity for action. Consequently, in this thesis, we seek to understand whether an identity transformation has taken place, or is under way, within the French third sector, and thus whether it is becoming a collective actor. To address our hypotheses and research questions, we conducted a discourse analysis drawing on two data sources: reflective texts written by key actors of the French third sector, and interviews conducted with some of them in spring 2003 and fall 2005. Building on two theoretical inspirations (Hobson and Lindholm, 1997, and Melucci, 1991), our analysis proceeded in two stages. A first phase allowed us to identify two cognitive frames through which the actors of the French third sector define themselves, the "association" and "solidarity economy" frames.
A second phase of analysis consisted in determining whether the two cognitive frames could be considered tensions within one and the same collective actor. Our results lead us to conclude that French third-sector organizations do not perceive themselves, overall, as a unified whole. Nevertheless, we identified elements showing that the frames are partially reconcilable. This reconciliation is largely subordinate to the French, European, and international sociopolitical and economic contexts, and is also conditional on finding a mode of operation that suits all actors.