565 results for Généralités
at Université de Montréal, Canada
Abstract:
This dissertation is a comparative study of the office of intendant in Canada and in the généralités of Brittany and Tours during the first half of the 18th century (1700-1750). It uses the intendant as a vantage point to ask whether the exercise of power in a colonial setting had specific features compared with the metropolitan context. Regarded by most historians of Ancien Régime France as the key figure in the political evolution that supposedly carried the monarchy from its judicial phase to its so-called "administrative" phase, the intendant of justice, police and finance, or commissaire départi, stands at the heart of debates on absolutism, and his front-line role in the work of monarchical centralization makes him the ideal subject for observing the actual reach of that regime on the ground. Examining how the intendancy operated is a necessary first step for anyone seeking to understand the relations between administrators and the administered and to gauge the state's regulatory capacity. Within the remit defined by his commission, which tasks actually occupied him? This dissertation studies the intendant from the standpoint of his practice, drawing on an internal description of the sources he produced to dissect his mechanisms of intervention. Two types of documents are analyzed in turn: the correspondence, including enclosures and working papers, and the acts of regulatory scope, including ordinances and rulings of the Conseil d'État. Along the way, we encounter the individuals and groups who sought the intendant's intervention, lifting the veil on the power relations and interactions that bound him to his superiors, to those under his jurisdiction, and to local institutions. The exercise makes it possible to frame in new terms the actions of a figure whose powers and main decisions were well known, but whose underlying logic was far less so.
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, both useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. 
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
Abstract:
The aim of this paper is to identify the factors that may explain bank failures in the West African Economic and Monetary Union (UEMOA) between 1980 and 1995. Using a conditional logit model on panel data, our results show that the variables that raise the probability of bank failure are: i) the level of indebtedness to the central bank; ii) a low level of demand and sight deposits; iii) the portfolio of commercial bills relative to total credit; iv) a low level of term deposits of more than 2 and up to 10 years relative to total assets; and v) the ratio of liquid assets to total assets. Conversely, the variables that raise the likelihood of bank survival are: i) the ratio of capital to total assets; ii) net profits relative to total assets; iii) the ratio of total credit to total assets; iv) 2-year term deposits relative to total assets; and v) the level of commitments in the form of guarantees and endorsements relative to total assets. The ratios of commercial-bill portfolios and of liquid assets to total assets are the variables that explain the failures of commercial banks, whereas term deposits of more than 2 and up to 10 years drive the failures of development banks. These failures were considerably reduced by the creation in 1989 of the regional banking regulation commission. Within the UEMOA, only the variable for Senegal appears to contribute positively to the probability of failure.
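The kind of model estimated here can be illustrated with a minimal sketch: a plain logit fitted by gradient ascent on invented data (a single capital-ratio regressor and hypothetical failure indicators), not the conditional panel logit or the actual UEMOA data used in the paper.

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear index to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.1, n_iter=10000):
    """Fit P(failure = 1 | x) = sigmoid(b0 + b.x) by gradient ascent on
    the log-likelihood. A plain pooled logit, not the conditional
    (fixed-effects) panel logit of the paper."""
    k = len(X[0])
    b = [0.0] * (k + 1)  # b[0] is the intercept
    for _ in range(n_iter):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(b[0] + sum(bj * xj for bj, xj in zip(b[1:], xi)))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        b = [bj + lr * g / len(y) for bj, g in zip(b, grad)]
    return b

def predict(b, x):
    """Predicted failure probability for one bank-year observation."""
    return sigmoid(b[0] + sum(bj * xj for bj, xj in zip(b[1:], x)))

# Hypothetical data: capital/total-assets ratio in percent; failures
# concentrate among thinly capitalized banks, mirroring the finding
# that the capital ratio raises the likelihood of survival.
capital_ratio = [[2.0], [3.0], [4.0], [10.0], [12.0], [15.0]]
failed = [1, 1, 1, 0, 0, 0]
coefs = fit_logit(capital_ratio, failed)
```

On these toy numbers the fitted coefficient on the capital ratio is negative, so a thinly capitalized bank receives a higher predicted failure probability than a well-capitalized one.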
Abstract:
This paper studies seemingly unrelated linear models with integrated regressors and stationary errors. By adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by feasible generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. Simulation results suggest that this new estimator compares favorably with others already proposed in the literature. We apply these new estimators to the testing of purchasing power parity (PPP) among the G-7 countries. The test based on the efficient estimates rejects the PPP hypothesis for most countries.
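The augmented regression idea can be sketched as follows: add leads and lags of the first-differenced regressor and run least squares on the augmented equation. This is a minimal single-equation OLS version (the paper's estimator applies feasible GLS with a long-run covariance matrix, omitted here), and the data are simulated, not the G-7 series.

```python
import random

def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with
    partial pivoting (A is a small dense matrix as a list of rows)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def dols(y, x, k=2):
    """Estimate the cointegrating coefficient in y_t = a + b x_t + u_t by
    regressing y_t on x_t plus k leads and lags of the differenced
    regressor (least squares here; the paper uses feasible GLS)."""
    dx = [x[t + 1] - x[t] for t in range(len(x) - 1)]
    rows, ys = [], []
    for t in range(k, len(x) - k - 1):
        rows.append([1.0, x[t]] + [dx[t + j] for j in range(-k, k + 1)])
        ys.append(y[t])
    p = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, ys)) for i in range(p)]
    return solve(XtX, Xty)[1]  # coefficient on x_t

# Simulated cointegrated pair: x is a random walk and
# y = 1 + 0.5 x + stationary noise (hypothetical data).
rng = random.Random(0)
x = [0.0]
for _ in range(250):
    x.append(x[-1] + rng.gauss(0, 1))
y = [1.0 + 0.5 * xi + rng.gauss(0, 0.2) for xi in x]
b_hat = dols(y, x, k=2)
```

Because the regressor is integrated, the estimate converges to the true cointegrating coefficient (0.5 here) at rate T rather than the usual square root of T.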
Abstract:
Modern business cycle theory involves developing models that explain stylized facts. For this strategy to be successful, these facts should be well established. In this paper, we focus on the stylized facts of international business cycles. We use the generalized method of moments and quarterly data from nineteen industrialized countries to estimate pairwise cross-country and within-country correlations of macroeconomic aggregates. We calculate standard errors of the statistics for our unique panel of data and test hypotheses about the relative sizes of these correlations. We find a lower cross-country correlation of all aggregates and especially of consumption than in previous studies. The cross-country correlations of consumption, output and Solow residuals are not significantly different from one another over the whole sample, but there are significant differences in the post-1973 subsample.
Abstract:
Multi-country models have not been very successful in replicating important features of the international transmission of business cycles. Standard models predict cross-country correlations of output and consumption which are respectively too low and too high. In this paper, we build a multi-country model of the business cycle with multiple sectors in order to analyze the role of sectoral shocks in the international transmission of the business cycle. We find that a model with multiple sectors generates a higher cross-country correlation of output than standard one-sector models, and a lower cross-country correlation of consumption. In addition, it predicts cross-country correlations of employment and investment that are closer to the data than the standard model. We also analyze the relative effects of multiple sectors, trade in intermediate goods, imperfect substitution between domestic and foreign goods, home preference, capital adjustment costs, and capital depreciation on the international transmission of the business cycle.
Abstract:
We provide a characterization of selection correspondences in two-person exchange economies that can be core rationalized in the sense that there exists a preference profile with some standard properties that generates the observed choices as the set of core elements of the economy for any given initial endowment vector. The approach followed in this paper deviates from the standard rational choice model in that a rationalization in terms of a profile of individual orderings rather than in terms of a single individual or social preference relation is analyzed.
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are twofold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied to all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
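The Monte Carlo test technique underlying these results can be sketched generically: when a test statistic is pivotal under the null hypothesis, simulating it N times and ranking the observed value against the simulated ones yields a provably exact p-value. The sketch below illustrates the principle on a simple zero-correlation test for Gaussian data, a hypothetical example rather than the MLR criteria of the paper.

```python
import math
import random

def sample_corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def mc_test_zero_corr(x, y, n_rep=99, seed=0):
    """Exact Monte Carlo p-value for H0: corr = 0 with Gaussian data.
    Under H0 the statistic |r| is pivotal (its distribution depends only
    on the sample size), so simulating it under the null gives an exact
    test: p = (1 + #{simulated >= observed}) / (n_rep + 1)."""
    rng = random.Random(seed)
    n = len(x)
    obs = abs(sample_corr(x, y))
    exceed = 0
    for _ in range(n_rep):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [rng.gauss(0, 1) for _ in range(n)]
        if abs(sample_corr(xs, ys)) >= obs:
            exceed += 1
    return (1 + exceed) / (n_rep + 1)

# Hypothetical data: y is strongly correlated with x, so the exact
# MC p-value should be small.
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(30)]
y = [0.9 * xi + rng.gauss(0, 0.3) for xi in x]
p_value = mc_test_zero_corr(x, y, n_rep=99, seed=2)
```

With 99 replications the attainable p-values are multiples of 1/100, and the test has exactly its nominal size in any sample size, which is the sense in which Monte Carlo tests are exact.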
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
Abstract:
In this paper, we review some recent developments in econometrics that may be of interest to researchers in fields other than economics, and we highlight the particular light that econometrics can shed on some general themes in methodology and the philosophy of science, such as falsifiability as a criterion of the scientific character of a theory (Popper), the underdetermination of theories by data (Quine), and instrumentalism. In particular, we stress the contrast between two styles of modeling, the parsimonious approach and the statistical-descriptive approach, and we discuss the links between the theory of statistical testing and the philosophy of science.
Abstract:
We analyze an alternative to the standard rationalizability requirement for observed choices by considering non-deteriorating selections. A selection function is a generalization of a choice function where selected alternatives may depend on a reference (or status quo) alternative in addition to the set of feasible options. A selection function is non-deteriorating if there exists an ordering over the universal set of alternatives such that the selected alternatives are at least as good as the reference option. We characterize non-deteriorating selection functions in an abstract framework and in an economic environment.
Abstract:
We provide a survey of the literature on ranking sets of objects. The interpretations of those set rankings include those employed in the theory of choice under complete uncertainty, rankings of opportunity sets, set rankings that appear in matching theory, and the structure of assembly preferences. The survey is prepared for the Handbook of Utility Theory, vol. 2, edited by Salvador Barberà, Peter Hammond, and Christian Seidl, to be published by Kluwer Academic Publishers. The chapter number is provisional.
Abstract:
This paper examines several families of population principles in the light of a set of axioms. In addition to the critical-level utilitarian, number-sensitive critical-level utilitarian and number-dampened families and their generalized counterparts, we consider the restricted number-dampened family (suggested by Hurka) and introduce two new families: the restricted critical-level and restricted number-dependent critical-level families. Subsets of the restricted families have nonnegative critical levels and avoid both the repugnant and sadistic conclusions but fail to satisfy an important independence condition. We defend the critical-level principles with positive critical levels.
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods; yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problems raised in particular by the sup-type and combined test statistics as well as (when relevant) by unidentified nuisance parameters under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
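As a concrete instance of one of the listed criteria, a Goldfeld-Quandt-type statistic (ratio of residual variances across two subsamples ordered by a regressor) can be combined with the Monte Carlo technique: holding the regressor fixed, the unknown error variance cancels in the ratio, so the statistic is pivotal under homoskedastic Gaussian errors and its null distribution can be simulated exactly. A minimal sketch on invented data:

```python
import random

def ols_resid(y, x):
    """Residuals from an OLS regression of y on a constant and x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return [yi - a - b * xi for xi, yi in zip(x, y)]

def goldfeld_quandt(y, x):
    """Goldfeld-Quandt statistic: ratio of residual variances from
    separate regressions on the low-x and high-x halves."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    h = len(x) // 2
    lo, hi = order[:h], order[h:]
    e1 = ols_resid([y[i] for i in lo], [x[i] for i in lo])
    e2 = ols_resid([y[i] for i in hi], [x[i] for i in hi])
    s1 = sum(e * e for e in e1) / (len(e1) - 2)
    s2 = sum(e * e for e in e2) / (len(e2) - 2)
    return s2 / s1

def mc_pvalue(stat_obs, x, n_rep=99, seed=0):
    """Exact MC p-value: simulate the statistic under homoskedastic
    Gaussian errors (the variance cancels, so sigma = 1 suffices)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_rep):
        y_sim = [rng.gauss(0, 1) for _ in x]
        if goldfeld_quandt(y_sim, x) >= stat_obs:
            exceed += 1
    return (1 + exceed) / (n_rep + 1)

# Invented data with error variance growing sharply in the regressor.
xs = [float(i) for i in range(1, 41)]
rng = random.Random(3)
y_het = [rng.gauss(0.0, xi * xi) for xi in xs]
p_het = mc_pvalue(goldfeld_quandt(y_het, xs), xs, n_rep=99, seed=4)
```

The same simulate-and-rank step applies unchanged to the other criteria above once their statistics are shown to be pivotal, which is the core of the size-control argument.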