Abstract:
The theories underlying transformational leadership suggest that the congruence of personal and organizational values is central to how transformational leadership operates. Examining this proposition, however, raises several questions. For instance, one may ask how much the type (subjective or objective) and the target (team, organization) of the congruence under consideration matter, which value content should be used to judge congruence, which situational contingencies modulate the importance of congruence, and what role the congruence of leaders' own values plays. To enrich knowledge about the role of values in transformational leadership, this thesis therefore presents three articles in which the role of values and of their congruence is approached from three angles. The data used in the thesis come from a large Canadian organization, and subsets of the data are created to address the objectives of each article. The first article examines how a) managers' personal values, b) the values they perceive in their organization, and c) the congruence between these two sets of values relate to the display of transformational leadership behaviours as perceived by their subordinates. The results suggest that value congruence is not related to transformational leadership, but that certain values at the personal and organizational levels do show such a relationship. The second article addresses the potential moderating role of subordinates' person-organization value congruence in the relationship between transformational leadership and empowered behaviours. The results show that value congruence can indeed moderate this relationship, and that the form of the moderation may depend on employees' tenure. The third article examines the moderating role of the presence of values, and of their congruence, at the team level in the relationship between transformational leadership and empowered behaviours. The results suggest that values and their congruence within teams can moderate the effectiveness of transformational leadership with respect to empowered behaviours. Overall, the presence and congruence of five of the seven values tested appear to strengthen the relationship between transformational leadership and empowered behaviours. By addressing the questions raised by the examination of the theoretical proposition about the role of values and their congruence in transformational leadership, this thesis thus improves our understanding of that role. Specifically, its results suggest that, overall, value congruence may matter more for the effectiveness of transformational leadership when the values considered are more important within the individual's team and when the individual has little tenure in the organization. Moreover, with respect to leaders, the presence of collective well-being and openness-to-change values appears to be related to the display of transformational leadership behaviours. A discussion covers these results and outlines the limitations of the thesis as well as avenues for future research.
Abstract:
Intervention analysis report presented to the Faculté des arts et sciences in partial fulfillment of the requirements for the degree of Master of Science (M.Sc.) in Psychoeducation.
Abstract:
Moulin (1999) characterizes the fixed-path rationing methods by efficiency, strategy-proofness, consistency, and resource-monotonicity. In this note, we give a straightforward proof of his result.
Abstract:
We study the problem of locating two public goods for a group of agents with single-peaked preferences over an interval. An alternative specifies a location for each public good. In Miyagawa (1998), each agent consumes only his most preferred public good without rivalry. We extend preferences lexicographically and characterize the class of single-peaked preference rules by Pareto-optimality and replacement-domination. This result is considerably different from the corresponding characterization by Miyagawa (2001a).
Abstract:
We consider a probabilistic approach to the problem of assigning k indivisible identical objects to a set of agents with single-peaked preferences. Using the ordinal extension of preferences, we characterize the class of uniform probabilistic rules by Pareto efficiency, strategy-proofness, and no-envy. We also show that in this characterization no-envy cannot be replaced by anonymity. When agents are strictly risk-averse von Neumann-Morgenstern utility maximizers, we reduce the problem of assigning k identical objects to a problem of allocating the amount k of an infinitely divisible commodity.
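As background for the last step, a minimal statement of the classical uniform rule for the divisible problem, under the assumption that this is the benchmark rule referred to (the notation p_i for agent i's peak and λ for the calibrating constant is ours, for illustration):

$$
U_i(R, k) =
\begin{cases}
\min\{p_i, \lambda\} & \text{if } \sum_j p_j \ge k, \\
\max\{p_i, \lambda\} & \text{if } \sum_j p_j \le k,
\end{cases}
\qquad \text{with } \lambda \text{ chosen so that } \sum_i U_i(R, k) = k.
$$

Each agent receives his peak amount when that is compatible with feasibility; otherwise a common bound $\lambda$ rations all agents on the same side of their peaks.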
Abstract:
We analyze collective choice procedures with respect to their rationalizability by means of profiles of individual preference orderings. A selection function is a generalization of a choice function where selected alternatives may depend on a reference (or status quo) alternative in addition to the set of feasible options. Given the number of agents n, a selection function satisfies efficient and non-deteriorating n-rationalizability if there exists a profile of n orderings on the universal set of alternatives such that the selected alternatives are (i) efficient for that profile, and (ii) at least as good as the reference option according to each individual preference. We analyze efficient and non-deteriorating collective choice in a general abstract framework and provide a characterization result given a universal set domain.
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, during the 1976-1992 period. We also test a conditional APT model by using the difference between the 30-day rate (Cdb) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from a total of 25 securities traded on the Brazilian markets. The inclusion of this second factor proves to be crucial for the appropriate pricing of the portfolios.
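As a hedged illustration of how such a two-factor model can be cast in GMM form, here is one simple set of moment conditions, assuming constant betas and a vector of conditioning instruments z_t; this is a sketch of the general approach, not the paper's exact specification:

$$
E\big[\big(r_{i,t+1} - \beta_{i,m}\, r_{m,t+1} - \beta_{i,s}\, f_{s,t+1}\big)\, z_t\big] = 0, \qquad i = 1, \dots, N,
$$

where $r_{i,t+1}$ and $r_{m,t+1}$ are excess returns on portfolio $i$ and on the market proxy, $f_{s,t+1}$ is the spread between the 30-day rate and the overnight rate, and $z_t$ contains variables known at time $t$. GMM chooses the betas so that the sample analogues of these orthogonality conditions are as close to zero as possible, and the overidentifying restrictions provide the test of the model.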
Abstract:
In a linear production model, we characterize the class of efficient and strategy-proof allocation functions, and the class of efficient and coalition strategy-proof allocation functions. In the former class, requiring equal treatment of equals allows us to identify a unique allocation function. This function is also the unique member of the latter class which satisfies uniform treatment of uniforms.
Abstract:
We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
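A minimal sketch of the price decomposition described above, with the notation assumed here for illustration rather than taken from the paper:

$$
\ln P_{i,t} = q_{i,t} + u_{i,t}, \qquad dq_{i,t} = \mu_i\, dt + \sigma_{q,i}\, dW_{q,i,t}, \qquad du_{i,t} = -\kappa_i\, u_{i,t}\, dt + \sigma_{u,i}\, dW_{u,i,t},
$$

so that the log price is the sum of a Brownian motion with drift (the permanent component, a geometric Brownian motion in levels) and a stationary Ornstein-Uhlenbeck process (the transitory component). The $h$-period continuously compounded return is the increment of $\ln P_{i,t}$ over $h$, and its regression slope on the corresponding market return mixes a "permanent beta" and a "transitory beta" with weights that depend on $h$; if the transitory component is absent, the weights no longer depend on $h$ and the sampling-interval effect disappears.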
Abstract:
This paper examines empirically the effects of distortionary taxation on labor supply using a general equilibrium framework. The long-term relations predicted by the model are derived and tested using Canadian data between 1966 and 1993. While the cointegrating predictions of the model without taxation are rejected, the ones of the model with labor taxation are not. Persistent labor tax rate increases appear to play an important role in the observed downward trend in hours worked.
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to gather further evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results, over five-year subperiods, show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is rejected less frequently once the possibility of non-normal errors is allowed for.
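For reference, the Gaussian benchmark mentioned above is the Gibbons-Ross-Shanken statistic for the null that all intercepts are zero; it is stated here as background in its standard single-factor form (up to the usual degrees-of-freedom conventions), not as the paper's exact formulation:

$$
W = \frac{T - N - 1}{N} \cdot \frac{\hat{\alpha}' \hat{\Sigma}^{-1} \hat{\alpha}}{1 + \hat{\mu}_m^2 / \hat{\sigma}_m^2} \sim F(N,\, T - N - 1),
$$

where $T$ is the number of observations, $N$ the number of test portfolios, $\hat{\alpha}$ the vector of estimated intercepts, $\hat{\Sigma}$ the estimated residual covariance matrix, and $\hat{\mu}_m$, $\hat{\sigma}_m$ the sample mean and standard deviation of the market excess return.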
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian-based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
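To make the MC/MMC logic concrete, here is a minimal Python sketch of a simulation-based (Monte Carlo) p-value and of its maximized (MMC) version over a grid of nuisance-parameter values. The statistic and the simulator are generic placeholders, not the paper's actual test statistics:

import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, seed=None):
    """Monte Carlo p-value: proportion of simulated statistics at least as large
    as the observed one, with the +1 correction that makes the test exact for a
    finite number of replications."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

def mmc_pvalue(stat_obs, simulate_stat_given, nuisance_grid, n_rep=99, seed=None):
    """Maximized MC (MMC) p-value: the MC p-value is maximized over a grid of
    candidate nuisance-parameter values so that the test controls its level."""
    return max(
        mc_pvalue(stat_obs, lambda rng: simulate_stat_given(nu, rng), n_rep, seed)
        for nu in nuisance_grid
    )

# Toy usage: test that a series is i.i.d. N(0, 1) against serial correlation,
# using the absolute lag-1 autocorrelation as a placeholder statistic.
def lag1(y):
    return abs(np.corrcoef(y[:-1], y[1:])[0, 1])

rng = np.random.default_rng(0)
x = rng.standard_normal(60)
p = mc_pvalue(lag1(x), lambda g: lag1(g.standard_normal(60)), n_rep=99, seed=1)
print(f"MC p-value: {p:.3f}")

The +1 correction makes the MC test exact for a finite number of replications; maximizing the p-value over the nuisance-parameter grid is what guarantees level control when the null distribution of the statistic depends on unknown parameters.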
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to a simulation-based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture, and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
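As background, the classical multivariate skewness and kurtosis measures on which such criteria build are Mardia's statistics, stated here for reference rather than as the paper's exact criteria:

$$
b_{1,p} = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \big[ (x_i - \bar{x})' S^{-1} (x_j - \bar{x}) \big]^3,
\qquad
b_{2,p} = \frac{1}{n} \sum_{i=1}^{n} \big[ (x_i - \bar{x})' S^{-1} (x_i - \bar{x}) \big]^2,
$$

where, in the setting above, the $x_i$ would be the standardized multivariate residuals, $\bar{x}$ their sample mean and $S$ their sample covariance matrix; the empirical values are then compared with simulation-based estimates of their expected values under the hypothesized error distribution.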
Abstract:
This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors, 2) Generalized Method of Moments, 3) Simulated Method of Moments, and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
Abstract:
In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression, such as the CAPM, allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative - possibly asymmetric - heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived from them. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.