960 results for conditional
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role in the validity of standard CAPM-like stock pricing and preference-free option pricing.
Abstract:
The aim of this paper is to identify the factors likely to explain bank failures within the West African Economic and Monetary Union (UEMOA) between 1980 and 1995. Using a conditional logit model on panel data, our results show that the variables that positively affect the probability of bank failure are: i) the level of indebtedness to the central bank; ii) a low level of demand and sight deposits; iii) portfolios of commercial bills relative to total loans; iv) a low amount of term deposits of over 2 and up to 10 years relative to total assets; and v) the ratio of liquid assets to total assets. Conversely, the variables that contribute positively to the likelihood of bank survival are: i) the ratio of capital to total assets; ii) net profits relative to total assets; iii) the ratio of total credit to total assets; iv) 2-year term deposits relative to total assets; and v) the level of commitments in the form of guarantees and endorsements relative to total assets. The ratios of commercial bill portfolios and of liquid assets to total assets are the variables that explain the failure of commercial banks, whereas term deposits of over 2 and up to 10 years are at the origin of the failures of development banks. These failures were considerably reduced by the creation in 1989 of the regional banking regulation commission. Within the UEMOA, only the variable associated with Senegal appears to contribute positively to the probability of failure.
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
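The two-sided autoregression idea can be illustrated with a toy simulation. This is a hedged sketch of the underlying conditioning property for a stationary Gaussian AR(1), not the paper's split-sample procedure; the parameter values are assumptions for illustration.

```python
import numpy as np

# For a stationary Gaussian AR(1), conditioning an observation on its two
# neighbours gives a two-sided autoregression:
#   E[y_t | y_{t-1}, y_{t+1}] = rho/(1+rho**2) * (y_{t-1} + y_{t+1}),
# and intercalary independence makes the odd-indexed observations conditionally
# independent given the even-indexed ones, so ordinary least squares applies.
rng = np.random.default_rng(0)
rho, n = 0.6, 200_001
e = rng.standard_normal(n)
y = np.empty(n)
y[0] = e[0] / np.sqrt(1 - rho**2)            # stationary starting value
for t in range(1, n):
    y[t] = rho * y[t - 1] + e[t]

t = np.arange(1, n - 1, 2)                   # odd-indexed observations
X = np.column_stack([y[t - 1], y[t + 1]])    # their even-indexed neighbours
b, *_ = np.linalg.lstsq(X, y[t], rcond=None)
print(b, rho / (1 + rho**2))                 # both slopes ≈ 0.441
```

Because the conditioned model is a standard linear regression, classical inference on the two-sided coefficients is exact in finite samples, which is the leverage the paper exploits.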
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods. Yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the new tests suggested. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics, as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with: (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
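The Monte Carlo test technique that delivers the exact p-values can be sketched in a few lines. The code below is a minimal illustration rather than the paper's implementation: the names `mc_test_pvalue` and `gq_stat` are hypothetical, and the design (i.i.d. Gaussian errors observed directly, rather than regression residuals) is a simplifying assumption.

```python
import numpy as np

def mc_test_pvalue(stat_fn, data, simulate_null, n_rep=99, seed=0):
    """Monte Carlo test (Dwass/Barnard): rank the observed statistic among
    n_rep statistics simulated under the null. For a continuous pivotal
    statistic the resulting p-value is exact at levels that are multiples
    of 1/(n_rep + 1)."""
    rng = np.random.default_rng(seed)
    s0 = stat_fn(data)
    sims = np.array([stat_fn(simulate_null(rng)) for _ in range(n_rep)])
    return (1 + np.sum(sims >= s0)) / (n_rep + 1)

# A Goldfeld-Quandt-type statistic: ratio of second- to first-half variances.
# It is scale-invariant, hence pivotal under i.i.d. Gaussian errors.
def gq_stat(e):
    n = len(e) // 2
    return np.var(e[n:], ddof=1) / np.var(e[:n], ddof=1)

rng = np.random.default_rng(1)
e_null = rng.standard_normal(100)            # homoskedastic sample
p = mc_test_pvalue(gq_stat, e_null, lambda r: r.standard_normal(100))
print(p)
```

The same ranking works unchanged if the null simulator draws heavy-tailed errors instead of Gaussian ones, which is the distributional flexibility the abstract emphasizes.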
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials respectively. The eigenfunction approach has at least six advantages: i) it is general, since any square integrable function may be written as a linear combination of the eigenfunctions; ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal components analysis; iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; iv) more importantly, this generates fat tails for the variance and return processes; v) in contrast to popular models, the variance of the variance is a flexible function of the variance; vi) these models are closed under temporal aggregation.
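For the log-normal case, the eigenfunction property behind the approach can be checked numerically. The sketch below is illustrative only, with assumed parameter values; it verifies that the probabilists' Hermite polynomials are eigenfunctions of the conditional expectation operator of a Gaussian AR(1) state, with eigenvalues rho**k, which is what makes the implied variance dynamics ARMA.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Gaussian AR(1) state x_t = rho*x_{t-1} + sqrt(1-rho^2)*eps_t with
# stationary N(0,1) marginal. Eigenfunction property:
#   E[He_k(x_t) | x_{t-1}] = rho**k * He_k(x_{t-1}).
rng = np.random.default_rng(0)
rho, n = 0.9, 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * eps[t]

he2 = hermeval(x, [0, 0, 1])          # He_2(x) = x**2 - 1
# Regress He_2(x_t) on He_2(x_{t-1}): the slope estimates the eigenvalue rho**2.
slope = np.mean(he2[1:] * he2[:-1]) / np.mean(he2[:-1] ** 2)
print(slope, rho**2)                  # slope ≈ 0.81
```

A variance specified as a linear combination of such eigenfunctions therefore inherits first-order autoregressive dynamics component by component.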
Abstract:
The focus of the paper is the nonparametric estimation of an instrumental regression function P defined by conditional moment restrictions stemming from a structural econometric model: E[Y-P(Z)|W]=0, involving endogenous variables Y and Z and instruments W. The function P is the solution of an ill-posed inverse problem and we propose an estimation procedure based on Tikhonov regularization. The paper analyses identification and overidentification of this model and presents asymptotic properties of the estimated nonparametric instrumental regression function.
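Tikhonov regularization can be illustrated on a toy finite-dimensional analogue of the ill-posed problem. Everything below is an assumption for illustration: the operator is a random matrix with rapidly decaying singular values, whereas in the paper it is the conditional expectation operator given the instruments.

```python
import numpy as np

# Ill-posed linear system T p = r, solved by Tikhonov regularization:
#   p_alpha = (alpha*I + T'T)^{-1} T' r.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.85 ** np.arange(n)                     # singular values -> 0: ill-posed
T = U @ (s[:, None] * V.T)                   # T = U diag(s) V'
beta = 1.0 / (1.0 + np.arange(n)) ** 2       # true solution smooth in singular basis
p_true = V @ beta
r = T @ p_true + 1e-3 * rng.standard_normal(n)   # noisy observed "reduced form"

alpha = 1e-4                                 # regularization parameter
p_naive = np.linalg.solve(T, r)              # noise amplified by 1/s_min
p_tik = np.linalg.solve(alpha * np.eye(n) + T.T @ T, T.T @ r)
err_naive = np.linalg.norm(p_naive - p_true)
err_tik = np.linalg.norm(p_tik - p_true)
print(err_naive, err_tik)
```

The naive solve amplifies the observation noise by the reciprocal of the smallest singular values, while the ridge term alpha*I caps that amplification at the cost of a small bias; choosing alpha to balance the two is the regularization trade-off the paper analyses.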
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one’s situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as well as an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and kurtosis. We establish the direct links between the usual parametric estimation methods, namely, the QMLE, the GMM and the M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM one. A Monte Carlo analysis confirms the relevance of our approach, in particular, the importance of skewness.
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) versus unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations by linear projections on symmetric past innovations.
Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher order dynamics and risk premiums (in the spirit of GARCH (p,p) and GARCH in mean) and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher order dynamics, leverage effect and in-mean effect), usual GARCH models and continuous time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous time GARCH modeling can be recovered in our framework.
Abstract:
We examine the relationship between the risk premium on the S&P 500 index return and its conditional variance. We use the SMEGARCH (Semiparametric-Mean EGARCH) model, in which the conditional variance process is EGARCH while the conditional mean is an arbitrary function of the conditional variance. For monthly S&P 500 excess returns, the relationship between the two moments that we uncover is nonlinear and nonmonotonic. Moreover, we find considerable persistence in the conditional variance as well as a leverage effect, as documented by others. Finally, the shape of these relationships seems to be relatively stable over time.
Abstract:
Recent work suggests that the conditional variance of financial returns may exhibit sudden jumps. This paper extends the non-parametric procedure of Delgado and Hidalgo (1996) for detecting discontinuities in otherwise continuous functions of a random variable to higher conditional moments, in particular the conditional variance. Simulation results show that the procedure provides reasonable estimates of the number and location of jumps. The procedure detects several jumps in the conditional variance of daily returns on the S&P 500 index.
Abstract:
This paper studies the proposition that an inflation bias can arise in a setup where a central banker with asymmetric preferences targets the natural unemployment rate. Preferences are asymmetric in the sense that positive unemployment deviations from the natural rate are weighted more (or less) severely than negative deviations in the central banker's loss function. The bias is proportional to the conditional variance of unemployment. The time-series predictions of the model are evaluated using data from G7 countries. Econometric estimates support the prediction that the conditional variance of unemployment and the rate of inflation are positively related.
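The proportionality of the bias to the conditional variance can be illustrated with a generic asymmetric (linex) loss, a common stylization of asymmetric central-bank preferences. The sketch below is a toy calculation under assumed parameter values, not the paper's model: for the loss L(x) = (exp(g*x) - g*x - 1)/g**2 and a Gaussian shock, the optimal action shifts by g*Var/2.

```python
import numpy as np

# Minimizing E[L(x - a)] over a, with L(x) = (exp(g*x) - g*x - 1)/g**2 and
# x ~ N(0, s2), gives the first-order condition E[exp(g*(x - a))] = 1, i.e.
#   a* = log(E[exp(g*x)])/g = g*s2/2  (Gaussian moment generating function),
# a "bias" proportional to the variance of the shock.
rng = np.random.default_rng(0)
g = 1.0                                # asymmetry parameter (assumed)
estimates = {}
for s2 in (0.5, 1.0, 2.0):
    x = rng.normal(0.0, np.sqrt(s2), 1_000_000)
    estimates[s2] = np.log(np.mean(np.exp(g * x))) / g
    print(s2, estimates[s2], g * s2 / 2)
```

Doubling the conditional variance doubles the optimal shift, mirroring the paper's prediction that inflation and the conditional variance of unemployment move together.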
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
Abstract:
This thesis examines the constitution of the French third sector as a social and political actor. In many countries, relations between the state and the mutualist, cooperative, and associative organizations of civil society (a heterogeneous set referred to here as the "third sector") have recently been formalized through partnerships. In France, this institutionalization took concrete form in 2001 with the signing of a Charter (CPCA). We explore the hypothesis that, through institutionalization, the French third sector is constructing itself as an actor, with one or more identities of its own as well as a relatively well-defined societal project. The dominant perspective in the international literature on the institutionalization of relations between the state and the third sector is that of an instrumentalization of third sector organizations at the expense of their specificities and autonomy. This perspective seems limiting to us, as it appears blind to the organizations' capacity for action. Consequently, in this thesis, we seek to understand whether an identity transformation has taken place, or is under way, within the French third sector, and hence whether it is becoming a collective actor. To address our hypotheses and research questions, we carried out a discourse analysis using two data sources: position papers written by key actors of the French third sector, and interviews conducted with some of them in spring 2003 and fall 2005. Drawing on two theoretical inspirations (Hobson and Lindholm, 1997, and Melucci, 1991), our analysis proceeded in two stages. A first phase allowed us to identify two cognitive frames through which the actors of the French third sector define themselves, the "association" and "solidarity economy" frames.
A second phase of analysis sought to determine whether the two cognitive frames could be regarded as tensions within one and the same collective actor. Our results lead us to conclude that French third sector organizations do not, on the whole, perceive themselves as a unified ensemble. Nevertheless, we were able to identify elements showing that the frames are partially reconcilable. This reconciliation is heavily dependent on the French, European, and international sociopolitical and economic contexts, and is also conditional on finding a mode of operation suitable to all the actors.
Abstract:
Ever since Sen (1993) criticized the notion of internal consistency of choice, there has been a widespread perception that the standard rationalizability approach to the theory of choice has difficulty coping with the existence of external social norms. This paper introduces a concept of norm-conditional rationalizability and shows that external social norms can be accommodated, so as to be compatible with norm-conditional rationalizability, by means of suitably modified revealed preference axioms in the theory of rational choice on general domains à la Richter (1966, 1971) and Hansson (1968).