Abstract:
This paper aims to identify the factors that may explain bank failures in the West African Economic and Monetary Union (UEMOA) between 1980 and 1995. Using a conditional logit model on panel data, our results show that the variables that increase the probability of bank failure are: i) the level of indebtedness to the central bank; ii) a low level of demand and sight accounts; iii) the portfolio of commercial bills relative to total credit; iv) a low level of term deposits of more than 2 and up to 10 years relative to total assets; and v) the ratio of liquid assets to total assets. Conversely, the variables that increase the likelihood of bank survival are: i) the ratio of capital to total assets; ii) net profits relative to total assets; iii) the ratio of total credit to total assets; iv) 2-year term deposits relative to total assets; and v) the level of commitments in the form of guarantees and endorsements relative to total assets. The ratios of commercial-bill portfolios and of liquid assets to total assets are the variables that explain the failure of commercial banks, whereas term deposits of more than 2 and up to 10 years are behind the failures of development banks. These failures were considerably reduced by the creation, in 1989, of the regional banking regulation commission. Within the UEMOA, only the variable associated with Senegal appears to contribute positively to the probability of failure.
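A minimal sketch of the conditional (fixed-effects) logit on bank-year panel data may help; the notation is illustrative and not taken from the paper:
\[
P(y_{it} = 1 \mid x_{it}, \alpha_i) = \frac{\exp(\alpha_i + x_{it}'\beta)}{1 + \exp(\alpha_i + x_{it}'\beta)},
\]
where y_{it} = 1 indicates failure of bank i in year t, x_{it} collects balance-sheet ratios such as those listed above, and \alpha_i is a bank-specific effect. The conditional likelihood conditions on \sum_t y_{it}, which eliminates the \alpha_i and leaves \beta, the vector of coefficients whose signs are interpreted in the abstract.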
Abstract:
Multi-country models have not been very successful in replicating important features of the international transmission of business cycles. Standard models predict cross-country correlations of output and consumption which are respectively too low and too high. In this paper, we build a multi-country model of the business cycle with multiple sectors in order to analyze the role of sectoral shocks in the international transmission of the business cycle. We find that a model with multiple sectors generates a higher cross-country correlation of output than standard one-sector models, and a lower cross-country correlation of consumption. In addition, it predicts cross-country correlations of employment and investment that are closer to the data than the standard model. We also analyze the relative effects of multiple sectors, trade in intermediate goods, imperfect substitution between domestic and foreign goods, home preference, capital adjustment costs, and capital depreciation on the international transmission of the business cycle.
Abstract:
This paper develops and estimates a game-theoretical model of inflation targeting where the central banker's preferences are asymmetric around the targeted rate. In particular, positive deviations from the target can be weighted more, or less, severely than negative ones in the central banker's loss function. It is shown that some of the previous results derived under the assumption of symmetry are not robust to the generalization of preferences. Estimates of the central banker's preference parameters for Canada, Sweden, and the United Kingdom are statistically different from the ones implied by the commonly used quadratic loss function. Econometric results are robust to different forecasting models for the rate of unemployment but not to the use of measures of inflation broader than the one targeted.
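As an illustration of preferences that weight positive and negative deviations differently, one commonly used asymmetric specification is the linex loss, sketched here with an inflation target \pi^* and asymmetry parameter \gamma (the paper's exact functional form is not reproduced):
\[
L(\pi_t) = \frac{\exp\{\gamma(\pi_t - \pi^*)\} - \gamma(\pi_t - \pi^*) - 1}{\gamma^2},
\]
for \gamma > 0 positive deviations from the target are penalized more heavily than negative ones (and conversely for \gamma < 0), while as \gamma \to 0 the loss converges to the standard quadratic form (\pi_t - \pi^*)^2 / 2.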
Abstract:
This paper studies monetary policy in an economy where the central banker's preferences are asymmetric around optimal inflation. In particular, positive deviations from the optimum can be weighted more, or less, severely than negative deviations in the policy maker's loss function. It is shown that under asymmetric preferences, uncertainty can induce a prudent behavior on the part of the central banker. Since the prudence motive can be large enough to override the inflation bias, optimal monetary policy could be implemented even in the absence of rules, reputation, or contractual mechanisms. For certain parameter values, a deflationary bias can arise in equilibrium.
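One short step makes the prudence mechanism concrete. Under an asymmetric linex-type loss with parameter \gamma and conditionally normal inflation with variance \sigma_t^2 (assumptions made here only for illustration), the moment-generating function gives
\[
E_t\left[\exp\{\gamma(\pi_{t+1} - \pi^*)\}\right] = \exp\left\{\gamma\, E_t[\pi_{t+1} - \pi^*] + \tfrac{1}{2}\gamma^2 \sigma_t^2\right\},
\]
so the conditional variance enters the policy maker's first-order condition directly. This variance term is what can counteract the usual inflation bias, which is the sense in which the abstract speaks of a prudence motive and of a possible deflationary bias.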
Abstract:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests properly adjusted to deal with discrete distributions. We also propose a combined test procedure, whose level is again perfectly controlled through the Monte Carlo test technique and has better power properties than the individual tests that are combined. Finally, in a simulation experiment, we show that the technique suggested provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
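A sketch of the Monte Carlo test technique referred to above (the notation is illustrative): with a statistic S_0 computed from the observed sample and N statistics S_1, ..., S_N computed from independently permuted samples generated under the null hypothesis, the Monte Carlo p-value is
\[
\hat{p}_N(S_0) = \frac{N \hat{G}_N(S_0) + 1}{N + 1}, \qquad \hat{G}_N(S_0) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\{S_i \ge S_0\},
\]
and rejecting when \hat{p}_N(S_0) \le \alpha yields a test of exact level \alpha whenever \alpha(N + 1) is an integer; a randomized tie-breaking rule extends the argument to the discrete distributions mentioned in the abstract.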
Abstract:
Suzumura shows that a binary relation has a weak order extension if and only if it is consistent. However, consistency is demonstrably not sufficient to extend an upper semicontinuous binary relation to an upper semicontinuous weak order. Jaffray proves that any asymmetric (or reflexive), transitive and upper semicontinuous binary relation has an upper semicontinuous strict (or weak) order extension. We provide sufficient conditions for the existence of upper semicontinuous extensions of consistent, rather than transitive, relations. For asymmetric relations, consistency and upper semicontinuity suffice. For more general relations, we prove one theorem using a further consistency property and another with an additional continuity requirement.
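For reference, the consistency condition discussed above (Suzumura consistency) can be stated as follows; this is the standard definition, included only as a reminder:
\[
R \text{ is consistent if, whenever } x_0\, R\, x_1,\ x_1\, R\, x_2,\ \dots,\ x_{k-1}\, R\, x_k, \text{ it is not the case that } (x_k, x_0) \in P(R),
\]
where P(R) denotes the asymmetric (strict) part of R. Transitivity implies consistency, but the converse fails, which is why extension results for consistent relations are strictly more general than those for transitive relations.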
Abstract:
This paper characterizes welfarist social evaluation in a multi-profile setting where, in addition to multiple utility profiles, it is assumed that there are several profiles of non-welfare information. We prove new versions of the welfarism theorems in this alternative framework, and we illustrate that a very plausible and weak anonymity property is sufficient to generate anonymous social-evaluation orderings.
Abstract:
We extend the class of M-tests for a unit root analyzed by Perron and Ng (1996) and Ng and Perron (1997) to the case where a change in the trend function is allowed to occur at an unknown time. These M(GLS) tests adopt the GLS detrending approach of Dufour and King (1991) and Elliott, Rothenberg and Stock (1996) (ERS). Following Perron (1989), we consider two models: one allowing for a change in slope and the other for both a change in intercept and slope. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal tests PT(GLS) suggested by ERS. The asymptotic critical values of the tests are tabulated. Also, we compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. We show that the M(GLS) and PT(GLS) tests have an asymptotic power function close to the power envelope. An extensive simulation study analyzes the size and power in finite samples under various methods to select the truncation lag for the autoregressive spectral density estimator. An empirical application is also provided.
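As a reminder of the GLS detrending step underlying these tests (only the generic quasi-differencing is shown; the break date and the exact deterministic components z_t are specific to the paper):
\[
\bar{\alpha} = 1 + \frac{\bar{c}}{T}, \qquad y^{\bar{\alpha}} = \left(y_1,\ y_2 - \bar{\alpha} y_1,\ \dots,\ y_T - \bar{\alpha} y_{T-1}\right), \qquad z^{\bar{\alpha}} \text{ defined analogously},
\]
the deterministic coefficients \hat{\psi} are obtained by regressing y^{\bar{\alpha}} on z^{\bar{\alpha}}, and the M and PT statistics are computed from the detrended series \tilde{y}_t = y_t - z_t' \hat{\psi}; the non-centrality parameter \bar{c} is the value, mentioned in the abstract, chosen so that the tests have 50% asymptotic power.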
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no a priori knowledge of a structural level of disaggregation, a well-specified volatility model should be written in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions through a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and the related leverage effects), unlike the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models and continuous-time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous-time GARCH modeling can be recovered in our framework.
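A minimal sketch of the SR-SARV(1) structure described above, in illustrative notation (the paper's general formulation covers higher orders, multivariate systems and in-mean effects):
\[
r_{t+1} = \sigma_t\, \varepsilon_{t+1}, \qquad f_t \equiv \sigma_t^2 = \operatorname{Var}(r_{t+1} \mid J_t), \qquad E[f_{t+1} \mid J_t] = \omega + \gamma f_t,
\]
where J_t is an information set that may include unobservable state variables and \varepsilon_{t+1} has zero mean and unit variance given J_t. Squared innovations then display ARMA-type dynamics, while reducing the information set (for instance, to past returns only) preserves the structure, which is the aggregation robustness emphasized in the abstract.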
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes this framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
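The local-to-zero device mentioned above can be summarized in one line of standard weak-instrument asymptotics, reproduced here for context (the paper extends it to estimated instruments and regressors and to higher-order dependence):
\[
y = X\beta + u, \qquad X = Z\Pi + v, \qquad \Pi = \frac{C}{\sqrt{T}},
\]
so the first-stage coefficients shrink toward zero as the sample size T grows and the instruments remain weak in the limit. Under this drifting sequence, Wald statistics have nonstandard limiting behavior, whereas suitably constructed LM statistics behave better, which is the robustness comparison reported in the abstract.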
Abstract:
This paper derives the ARMA representation of integrated and realized variances when the spot variance depends linearly on two autoregressive factors, i.e., SR-SARV(2) models. This class of processes includes affine, GARCH diffusion and CEV models, as well as the eigenfunction stochastic volatility and the positive Ornstein-Uhlenbeck models. We also study the leverage-effect case and the relationship between the weak GARCH representation of returns and the ARMA representation of realized variances. Finally, various empirical implications of these ARMA representations are considered. We find that some parameters of the ARMA representation can be negative; hence, the positivity of the expected values of integrated or realized variances is not guaranteed. We also find that, for some observation frequencies, the continuous-time model parameters may be weakly identified, or not identified at all, through the ARMA representation of realized variances.
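For reference, and in illustrative notation, the two objects discussed above are
\[
IV_t = \int_{t-1}^{t} \sigma_s^2\, ds, \qquad RV_t = \sum_{i=1}^{m} r_{t,i}^2,
\]
where \sigma_s^2 is the spot variance and r_{t,i} are the m intraday returns of day t. When the spot variance is driven by two autoregressive factors, both quantities admit ARMA representations (typically of order (2,2)), whose coefficients depend on the continuous-time parameters and on the sampling frequency; this dependence is what drives the sign and identification issues noted in the abstract.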
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions, a condition not satisfied by standard Wald-type methods based on standard errors, and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
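As one concrete example of the kind of pivotal function discussed above, consider the classical Anderson-Rubin statistic, given here for illustration rather than as the paper's specific proposal. In the model y = Y\beta + u with instrument matrix Z of rank k,
\[
AR(\beta_0) = \frac{(y - Y\beta_0)' P_Z (y - Y\beta_0)/k}{(y - Y\beta_0)' M_Z (y - Y\beta_0)/(T - k)}, \qquad P_Z = Z(Z'Z)^{-1}Z', \quad M_Z = I - P_Z,
\]
follows an F(k, T - k) distribution under H_0: \beta = \beta_0 with i.i.d. normal errors, irrespective of the strength of the instruments; inverting the test over \beta_0 yields confidence sets that remain valid under weak identification, in contrast with Wald-type intervals based on standard errors.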
Abstract:
Ecole polytechnique de Montréal, Department of Mathematics, André Fortin, and Pierre Carreau of the Department of Chemical Engineering
Abstract:
The observation of intense luminescence in Si/SiO2 superlattices has opened new avenues for theoretical research on silicon-based materials, with possible applications in optoelectronics. Silicon in its crystalline phase has an indirect gap, making it less attractive than other luminescent materials. Designing silicon-based luminescent materials would therefore open the way to many applications. This work reports three contributions to the field. First, several models of Si/SiO2 superlattices were designed and studied using ab initio calculations in order to evaluate their structural, electronic and optical properties. The first two models, derived from the crystalline structures of silicon and silicon dioxide, demonstrated the important role of the Si/SiO2 interface in the optical properties. New structurally relaxed models were then built to better characterize the interfaces and thus better assess the effect of confinement on the optical properties. Second, a direct gap was obtained in the structurally relaxed models. The calculation of absorption (through the application of Fermi's golden rule) confirmed that the absorption (and emission) properties of crystalline silicon are improved when it is confined by SiO2. A blue shift with increasing confinement was also observed. A detailed study of the role of sub-oxidized atoms at the interfaces was also carried out. These atoms have the twofold effect of slightly increasing the energy gap and of flattening the electronic structure near the Fermi level. Third, a direct application of Slater transition theory, an approach derived from ensemble density functional theory, was carried out for crystalline silicon and then compared with X-ray absorption measurements. Very good agreement between theory and experiment is observed. These calculations were applied to the superlattices in order to estimate and characterize their electronic properties in the confinement region, in the conduction bands.
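For context, the absorption calculation mentioned above rests on Fermi's golden rule. In its standard independent-particle form (reproduced here for illustration, not as the exact implementation used in this work), the imaginary part of the dielectric function is
\[
\varepsilon_2(\omega) \propto \frac{1}{\omega^2} \sum_{v,c} \int_{BZ} d^3k\, \left| \langle \psi_{c,\mathbf{k}} | \hat{\mathbf{e}} \cdot \mathbf{p} | \psi_{v,\mathbf{k}} \rangle \right|^2 \delta\!\left(E_{c,\mathbf{k}} - E_{v,\mathbf{k}} - \hbar\omega\right),
\]
where v and c run over valence and conduction bands, \hat{\mathbf{e}} is the light polarization and \mathbf{p} the momentum operator. A direct gap allows the lowest-energy transitions without phonon assistance, which is why the relaxed superlattice models show enhanced absorption and emission.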
Abstract:
A summary in English is also available.