Abstract:
In this paper, we consider testing marginal normal distributional assumptions. More precisely, we propose tests based on moment conditions implied by normality. These moment conditions are known as the Stein (1972) equations. They coincide with the first class of moment conditions derived by Hansen and Scheinkman (1995) when the random variable of interest is a scalar diffusion. Among other examples, the Stein equation implies that the mean of Hermite polynomials is zero. The GMM approach we adopt is well suited for two reasons. First, it allows us to study in detail the parameter uncertainty problem, i.e., when the tests depend on unknown parameters that have to be estimated. In particular, we characterize the moment conditions that are robust against parameter uncertainty and show that Hermite polynomials are special examples. This is the main contribution of the paper. The second reason for using GMM is that our tests are also valid for time series. In this case, we adopt a heteroskedasticity-and-autocorrelation-consistent (HAC) approach to estimate the weighting matrix when the dependence of the data is unspecified. We also make a theoretical comparison of our tests with the Jarque and Bera (1980) test and the OPG regression tests of Davidson and MacKinnon (1993). Finite-sample properties of our tests are assessed through a comprehensive Monte Carlo study. Finally, three applications to GARCH and realized volatility models are presented.
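For orientation, the standard-normal case of the Stein equation can be stated as follows (a textbook statement, not reproduced from the paper): for $X \sim N(0,1)$ and any absolutely continuous $f$ with $\mathbb{E}|f'(X)| < \infty$,

$$\mathbb{E}\left[ f'(X) - X f(X) \right] = 0.$$

Taking $f = H_{k-1}$, the probabilists' Hermite polynomial of degree $k-1$, and combining $H_{k-1}'(x) = (k-1)H_{k-2}(x)$ with the recurrence $H_k(x) = x H_{k-1}(x) - (k-1)H_{k-2}(x)$ yields $\mathbb{E}[H_k(X)] = 0$ for every $k \geq 1$, which is the Hermite moment condition referred to above.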
Abstract:
Critical-level generalized-utilitarian population principles with positive critical levels provide an ethically attractive way of avoiding the repugnant conclusion. We discuss the axiomatic foundations of critical-level generalized utilitarianism and investigate its relationship to the sadistic and strong sadistic conclusions. A positive critical level avoids the repugnant conclusion. We demonstrate that no critical-level generalized-utilitarian principle can avoid both the repugnant and strong sadistic conclusions, and that the principles which do avoid both have significant defects of their own.
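For reference, the family under discussion is usually written as follows (our summary of the standard formulation in this literature, not a quotation from the paper): an alternative with population size $n$ and utility vector $(u_1,\dots,u_n)$ is evaluated by

$$V(u_1,\dots,u_n) = \sum_{i=1}^{n} \left[ g(u_i) - g(c) \right],$$

where $g$ is an increasing transform and $c$ is the critical level, so that adding a person is a social improvement exactly when her utility exceeds $c$; a positive $c$ is what blocks the repugnant conclusion.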
Abstract:
We examine the maximal-element rationalizability of choice functions with arbitrary domains. While rationality formulated in terms of the choice of greatest elements according to a rationalizing relation has been analyzed relatively thoroughly in the earlier literature, this is not the case for maximal-element rationalizability, except when it coincides with greatest-element rationalizability because of properties imposed on the rationalizing relation. We develop necessary and sufficient conditions for maximal-element rationalizability by itself, and for maximal-element rationalizability in conjunction with additional properties of a rationalizing relation such as reflexivity, completeness, P-acyclicity, quasi-transitivity, consistency and transitivity.
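To fix ideas, the two notions contrasted above rest on the following standard definitions (stated here for the reader's convenience): given a relation $R$ on a set $S$ with asymmetric part $P$,

$$G(S,R) = \{ x \in S : x\,R\,y \ \text{for all } y \in S \}, \qquad M(S,R) = \{ x \in S : \text{there is no } y \in S \text{ with } y\,P\,x \}.$$

Greatest elements must weakly dominate every alternative, whereas maximal elements merely must not be strictly dominated; the two sets coincide when $R$ is complete, which is why the earlier literature could treat the notions interchangeably under such assumptions.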
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken’s mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and multivariate generalizations of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once we allow for the possibility of non-normal errors.
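For reference, the Gaussian benchmark mentioned above is the Gibbons-Ross-Shanken statistic, which (up to degrees-of-freedom conventions; this is our summary of a standard result, not an excerpt from the paper) takes the form

$$GRS = \frac{T-N-1}{N}\cdot \frac{\hat\alpha' \hat\Sigma^{-1}\hat\alpha}{1+\hat\mu_m^2/\hat\sigma_m^2} \sim F(N,\,T-N-1)$$

under normality, where $\hat\alpha$ collects the $N$ estimated intercepts, $\hat\Sigma$ is the residual covariance matrix, and $\hat\mu_m/\hat\sigma_m$ is the sample Sharpe ratio of the market; mean-variance efficiency of the market portfolio corresponds to $\alpha = 0$.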
Abstract:
This paper studies testing for a unit root for large n and T panels in which the cross-sectional units are correlated. To model this cross-sectional correlation, we assume that the data is generated by an unknown number of unobservable common factors. We propose unit root tests in this environment and derive their (Gaussian) asymptotic distribution under the null hypothesis of a unit root and local alternatives. We show that these tests have significant asymptotic power when the model has no incidental trends. However, when there are incidental trends in the model and it is necessary to remove heterogeneous deterministic components, we show that these tests have no power against the same local alternatives. Through Monte Carlo simulations, we provide evidence on the finite sample properties of these new tests.
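Schematically, the data-generating process described above can be written as (an illustrative formulation consistent with the abstract, with notation chosen by us)

$$y_{it} = \rho_i\, y_{i,t-1} + u_{it}, \qquad u_{it} = \lambda_i' F_t + e_{it},$$

where $F_t$ is a vector of unobservable common factors of unknown dimension that induces the cross-sectional correlation, and the unit root null is $H_0: \rho_i = 1$ for all $i$, tested against local alternatives with $\rho_i < 1$ for some units.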
Abstract:
We propose two axiomatic theories of cost sharing with the common premise that agents demand comparable, though perhaps different, commodities and are responsible for their own demand. Under partial responsibility the agents are not responsible for the asymmetries of the cost function: two agents consuming the same amount of output always pay the same price; this holds true under full responsibility only if the cost function is symmetric in all individual demands. If the cost function is additively separable, each agent pays her stand-alone cost under full responsibility; this holds true under partial responsibility only if, in addition, the cost function is symmetric. By generalizing Moulin and Shenker’s (1999) Distributivity axiom to cost-sharing methods for heterogeneous goods, we identify in each of our two theories a different serial method. The subsidy-free serial method (Moulin, 1995) is essentially the only distributive method meeting Ranking and Dummy. The cross-subsidizing serial method (Sprumont, 1998) is the only distributive method satisfying Separability and Strong Ranking. Finally, we propose an alternative characterization of the latter method based on a strengthening of Distributivity.
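As background, the classic serial cost-sharing formula for a single homogeneous good can be sketched as follows (a minimal illustration with our own naming; the paper's subject is the extension of serial methods to heterogeneous goods):

```python
# A minimal sketch (our own illustration, not code from the paper) of the
# classic one-good serial rule: with demands ordered q_1 <= ... <= q_n and
# s_k = q_1 + ... + q_k + (n-k) q_k, agent i pays
# x_i = sum_{k<=i} [C(s_k) - C(s_{k-1})] / (n-k+1).
def serial_cost_shares(demands, cost):
    """demands: list of individual demands; cost: the cost function C."""
    n = len(demands)
    order = sorted(range(n), key=lambda i: demands[i])  # agents by demand
    q = [demands[i] for i in order]
    shares = [0.0] * n
    s_prev, cumulative_pay = 0.0, 0.0
    for k in range(n):
        # s_k: total output if every still-active agent demanded q[k]
        s_k = sum(q[:k]) + (n - k) * q[k]
        # the k-th cost increment is split equally among the n-k active agents
        cumulative_pay += (cost(s_k) - cost(s_prev)) / (n - k)
        shares[order[k]] = cumulative_pay
        s_prev = s_k
    return shares

# Example: three agents, quadratic cost C(q) = q^2
print(serial_cost_shares([1.0, 2.0, 3.0], lambda q: q ** 2))  # [3.0, 11.0, 22.0]
```

The shares sum to the total cost C(6) = 36, and an agent's payment never depends on how much larger the bigger demanders' consumptions are, which is the hallmark of serial methods.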
Abstract:
Public policies often involve choices of alternatives in which the size and the composition of the population may vary. Examples are the allocation of resources to prenatal care and the design of aid packages to developing countries. In order to assess the corresponding feasible choices on normative grounds, criteria for social evaluation that are capable of performing variable-population comparisons are required. We review several important axioms for welfarist population principles and discuss the link between individual well-being and the desirability of adding a new person to a given society.
Harsanyi’s Social Aggregation Theorem: A Multi-Profile Approach with Variable-Population Extensions
Abstract:
This paper provides new versions of Harsanyi’s social aggregation theorem that are formulated in terms of prospects rather than lotteries. Strengthening an earlier result, fixed-population ex-ante utilitarianism is characterized in a multi-profile setting with fixed probabilities. In addition, we extend the social aggregation theorem to social-evaluation problems under uncertainty with a variable population and generalize our approach to uncertain alternatives, which consist of compound vectors of probability distributions and prospects.
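Concretely, the fixed-population principle characterized here ranks prospects by the sum of expected utilities (our shorthand statement of ex-ante utilitarianism, not a formula taken from the paper):

$$W = \sum_{i=1}^{n} \mathbb{E}_p\left[\,u_i\,\right],$$

so that society evaluates an uncertain prospect by adding each individual's expected utility computed with the fixed probabilities $p$.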
Abstract:
This paper uses a standard two-period overlapping generations model to examine the behavior of an economy where both intergenerational transfers of time and bequests are available. While bequests have been examined extensively, time transfers have received little or no attention in the literature. Assuming a log-linear utility function and a Cobb-Douglas production function, we derive an explicit solution for the dynamics and show that altruistic intergenerational time transfers can take place in the presence of a binding non-negativity constraint on bequests. We also show that with either type of transfer, capital is an increasing function of the intergenerational degree of altruism. However, while with time transfers the labor supply of the young increases with the degree of altruism, with bequests it may decrease.
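As a point of reference, in the no-transfer Diamond benchmark with the same ingredients, log utility $U = \ln c_t + \beta \ln d_{t+1}$ and Cobb-Douglas output $y = A k^{\alpha}$ deliver the familiar law of motion (a textbook result stated in our notation, not the paper's transfer-augmented dynamics):

$$k_{t+1} = \frac{1}{1+n}\,\frac{\beta}{1+\beta}\,(1-\alpha)A k_t^{\alpha},$$

since the young save the fraction $\beta/(1+\beta)$ of the wage $w_t = (1-\alpha)A k_t^{\alpha}$; the paper's contribution is to characterize how altruistic time transfers and bequests modify such dynamics.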
Abstract:
In this paper: a) the consumer’s problem is studied over two periods, the second one involving S states, the consumer being endowed with S+1 incomes and having access to N financial assets; b) the consumer is then representable by a continuously differentiable system of demands: commodity demands, asset demands and desirabilities of income (the S+1 Lagrange multipliers of the S+1 constraints); c) the multipliers can be transformed into subjective Arrow prices; d) the effects of the various incomes on these Arrow prices decompose into a compensation effect (an Antonelli matrix) and a wealth effect; e) the Antonelli matrix has rank S-N, the dimension of incompleteness, if the consumer can financially adjust himself when facing income shocks; f) the matrix has rank S if not; g) in the first case, the matrix represents a residual aversion; in the second case, a fundamental aversion; the difference between them is an aversion to illiquidity; this last relation corresponds to the Drèze-Modigliani decomposition (1972); h) the fundamental aversion also decomposes into an aversion to impatience and a risk aversion; i) the above decompositions span a third decomposition: if there exists a sure asset (to be defined, the usual definition being too specific), the fundamental aversion admits a three-component decomposition, namely an aversion to impatience, a residual aversion and an aversion to the illiquidity of risky assets; j) the formulas of the corresponding financial premiums are also presented.
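Schematically, the setup in points a)-c) can be written as (our notation, chosen for illustration)

$$\max_{x,\theta} \ u(x_0,x_1,\dots,x_S) \quad \text{s.t.} \quad p_0 x_0 + q'\theta = y_0, \qquad p_s x_s = y_s + (A\theta)_s, \ s=1,\dots,S,$$

with asset prices $q$, an $S \times N$ payoff matrix $A$, and Lagrange multipliers $\lambda_0,\lambda_1,\dots,\lambda_S$ on the S+1 budget constraints; the subjective Arrow prices of point c) are then $\pi_s = \lambda_s/\lambda_0$.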
Abstract:
This paper derives the ARMA representation of integrated and realized variances when the spot variance depends linearly on two autoregressive factors, i.e., SR-SARV(2) models. This class of processes includes affine, GARCH diffusion and CEV models, as well as the eigenfunction stochastic volatility and positive Ornstein-Uhlenbeck models. We also study the leverage-effect case and the relationship between the weak GARCH representation of returns and the ARMA representation of realized variances. Finally, various empirical implications of these ARMA representations are considered. We find that some parameters of the ARMA representation can be negative, so the positivity of the expected values of integrated or realized variances is not guaranteed. We also find that, for some observation frequencies, the continuous-time model parameters may be weakly identified or unidentified through the ARMA representation of realized variances.
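For concreteness, the representation in question has the ARMA(2,2) shape (our schematic statement, with coefficients left generic):

$$RV_t = \omega + \phi_1 RV_{t-1} + \phi_2 RV_{t-2} + u_t + \theta_1 u_{t-1} + \theta_2 u_{t-2},$$

where the autoregressive roots are tied to the mean-reversion parameters of the two variance factors; the possibility that some of these coefficients are negative is what can break the positivity of the implied expected variances noted above.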
Abstract:
This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
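The asymptotic result underlying the adjustment can be sketched as follows (our paraphrase of the Barndorff-Nielsen and Shephard (2002a) asymptotics, with notation chosen by us): with M intraday returns, realized volatility $RV_t$ estimates the integrated variance $IV_t$ with a measurement error that is approximately

$$RV_t - IV_t \approx \sqrt{\tfrac{2}{M}\,IQ_t}\; z_t, \qquad z_t \sim N(0,1),$$

where $IQ_t$ is the integrated quarticity. An unbiased volatility loss can then be obtained by subtracting a feasible estimate of the error variance $2\,IQ_t/M$, itself computable from the realized quarticity, from the naive RV-based loss.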
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian-based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
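The core of the MC test procedure invoked above can be sketched in a few lines (a generic illustration of the simulated-p-value principle, with names of our choosing; the MMC variant additionally maximizes this p-value over nuisance parameters):

```python
import numpy as np

def mc_pvalue(stat_observed, simulate_stat, n_rep=99, seed=0):
    """Monte Carlo p-value: if the statistic's null distribution is pivotal,
    the rank of the observed value among n_rep simulated draws is uniform
    under the null, so this p-value delivers an exact-level test."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_observed)) / (n_rep + 1)

# Illustration: testing normality of a sample via its kurtosis
x = np.random.default_rng(1).standard_normal(60)
kurt = lambda v: np.mean(((v - v.mean()) / v.std()) ** 4)
print(mc_pvalue(kurt(x), lambda rng: kurt(rng.standard_normal(60))))
```

With n_rep = 99, rejecting when the p-value is at most 0.05 has exactly 5% size under the null, which is why combining such tests across equations avoids the conservativeness of Bonferroni-type bounds.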
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
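For reference, the empirical multivariate skewness and kurtosis criteria referred to here are typically Mardia-type statistics of the form (our statement of the standard definitions, applied to the standardized residuals $\hat z_t$):

$$SK = \frac{1}{T^2}\sum_{s=1}^{T}\sum_{t=1}^{T}(\hat z_s'\hat z_t)^3, \qquad KU = \frac{1}{T}\sum_{t=1}^{T}(\hat z_t'\hat z_t)^2,$$

whose simulated null distributions under each hypothesized error family deliver the comparisons with expected values described above.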
Abstract:
It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and so does not in general yield inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these projection techniques could previously be implemented only through costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution exploits the geometric properties of “quadrics”, which can be viewed as extensions of the usual confidence intervals and ellipsoids, and only least squares techniques are required for building the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
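For completeness, the Anderson-Rubin statistic on which these confidence sets are built can be written as (the standard formulation, in our notation, for a structural equation $y = Y\beta + u$ with a $T \times k$ instrument matrix $Z$ and no included exogenous regressors):

$$AR(\beta_0) = \frac{(y - Y\beta_0)'\, P_Z\,(y - Y\beta_0)/k}{(y - Y\beta_0)'\, M_Z\,(y - Y\beta_0)/(T-k)},$$

where $P_Z$ is the orthogonal projection onto the column space of $Z$ and $M_Z = I - P_Z$. Under $H_0: \beta = \beta_0$ with Gaussian errors, $AR(\beta_0) \sim F(k, T-k)$ regardless of instrument strength, and the confidence set $\{\beta_0 : AR(\beta_0) \le F_{\alpha}(k, T-k)\}$ is precisely the quadric whose projections the paper characterizes analytically.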