Abstract:
This paper proposes and investigates a metaheuristic tabu search algorithm (TSA) that generates optimal or near-optimal solution sequences for the feedback length minimization problem (FLMP) associated with a design structure matrix (DSM). The FLMP is a non-linear combinatorial optimization problem belonging to the NP-hard class, so finding an exact optimal solution is very hard and time-consuming, especially on medium and large problem instances. First, we introduce the subject and review the related literature and problem definitions. Using the tabu search method (TSM) paradigm, this paper presents a new tabu search algorithm that generates optimal or sub-optimal solutions for the feedback length minimization problem, using two different neighborhoods based on swapping two activities and shifting an activity to a different position. Furthermore, this paper includes numerical results for analyzing the performance of the proposed TSA and for setting appropriate values of its parameters. We then compare our results on benchmark problems with those already published in the literature. We conclude that the proposed tabu search algorithm is very promising: it outperforms the existing methods, and no other tabu search method for the FLMP has been reported in the literature. Applied to the process layer of multidimensional design structure matrices, the proposed tabu search algorithm proves to be a key optimization method for optimal product development.
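The abstract names the two neighborhoods but none of the implementation details, so the following is only a minimal Python sketch of such a tabu search, under stated assumptions: feedback length is measured as the positional distance spanned by each above-diagonal dependency, and the tenure, candidate-sampling scheme, and aspiration rule are illustrative choices, not the authors' method.

```python
import random

def feedback_length(dsm, order):
    """Cost of an activity sequence: for each dependency dsm[i][j] == 1
    (activity i needs input from j), a feedback occurs when j comes after
    i in the sequence; its length is taken as the positional distance."""
    pos = {a: k for k, a in enumerate(order)}
    return sum(pos[j] - pos[i]
               for i, row in enumerate(dsm)
               for j, dep in enumerate(row)
               if dep and pos[j] > pos[i])

def tabu_search(dsm, iters=1000, tenure=15, samples=40):
    """Tabu search over sequences with the two move types named in the
    abstract: swapping two activities and shifting one to a new position."""
    n = len(dsm)
    current = list(range(n))
    best, best_cost = current[:], feedback_length(dsm, current)
    tabu = {}  # move -> iteration until which it stays forbidden
    for it in range(iters):
        candidates = []
        for _ in range(samples):
            i, j = random.sample(range(n), 2)
            neigh = current[:]
            if random.random() < 0.5:            # swap neighborhood
                neigh[i], neigh[j] = neigh[j], neigh[i]
                move = ("swap", min(i, j), max(i, j))
            else:                                # shift neighborhood
                neigh.insert(j, neigh.pop(i))
                move = ("shift", i, j)
            candidates.append((feedback_length(dsm, neigh), move, neigh))
        for cost, move, neigh in sorted(candidates, key=lambda c: c[0]):
            if tabu.get(move, -1) < it or cost < best_cost:  # aspiration rule
                current, tabu[move] = neigh, it + tenure
                if cost < best_cost:
                    best, best_cost = neigh[:], cost
                break
    return best, best_cost
```

For instance, `tabu_search([[0, 1, 0], [0, 0, 1], [1, 0, 0]])` returns an ordering of three cyclically dependent activities together with its (unavoidable but minimized) feedback length.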
Abstract:
Translated from English by Jimmy Légaré and Olivier Paradis (Direction des bibliothèques de l'UdeM).
Abstract:
In this paper, we review some recent developments in econometrics that may be of interest to researchers in fields other than economics, and we highlight the particular light that econometrics can shed on some general themes in methodology and the philosophy of science, such as falsifiability as a criterion of the scientific character of a theory (Popper), the underdetermination of theories by data (Quine), and instrumentalism. In particular, we underline the contrast between two styles of modeling, the parsimonious approach and the statistico-descriptive approach, and we discuss the links between the theory of statistical testing and the philosophy of science.
Abstract:
We provide a survey of the literature on ranking sets of objects. The interpretations of those set rankings include those employed in the theory of choice under complete uncertainty, rankings of opportunity sets, set rankings that appear in matching theory, and the structure of assembly preferences. The survey is prepared for the Handbook of Utility Theory, vol. 2, edited by Salvador Barberà, Peter Hammond, and Christian Seidl, to be published by Kluwer Academic Publishers. The chapter number is provisional.
Abstract:
This paper studies monetary policy in an economy where the central banker's preferences are asymmetric around optimal inflation. In particular, positive deviations from the optimum can be weighted more, or less, severely than negative deviations in the policy maker's loss function. It is shown that under asymmetric preferences, uncertainty can induce prudent behavior on the part of the central banker. Since the prudence motive can be large enough to override the inflation bias, optimal monetary policy could be implemented even in the absence of rules, reputation, or contractual mechanisms. For certain parameter values, a deflationary bias can arise in equilibrium.
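The abstract does not spell out the loss function; a common way to formalize such asymmetry, shown here purely as an illustration and not as the paper's exact specification, is a linex loss around optimal inflation:

```latex
% Illustrative linex loss around optimal inflation \pi^* :
\[
  L(\pi) \;=\; \frac{e^{\gamma(\pi - \pi^{*})} - \gamma(\pi - \pi^{*}) - 1}{\gamma^{2}}
\]
% For \gamma > 0, positive deviations from \pi^* are penalized more severely
% than negative ones, while \gamma \to 0 recovers the symmetric quadratic
% loss \tfrac{1}{2}(\pi - \pi^{*})^{2}.
```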
Abstract:
A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet these remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo (MC) tests to obtain provably exact p-values, for both the standard and the new tests suggested. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
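As a sketch of the Monte Carlo test idea for one of the criteria listed, consider a Breusch-Pagan-Godfrey-type statistic (the setup and function names below are assumptions of the sketch, not the paper's implementation). Because the statistic is pivotal under the null, simulating it a finite number of times yields a provably exact p-value:

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Monte Carlo p-value: for a statistic that is pivotal under H0,
    p = (1 + #{simulated >= observed}) / (n_rep + 1) gives a test whose
    size is exactly alpha whenever alpha * (n_rep + 1) is an integer."""
    rng = rng or np.random.default_rng(0)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

def bp_stat(y, X, Z):
    """n * R^2 from regressing the squared OLS residuals of y on X
    against the variance regressors Z (Z should contain a constant)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e2 = (y - X @ b) ** 2
    g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
    u = e2 - Z @ g
    return len(y) * (1 - (u @ u) / ((e2 - e2.mean()) @ (e2 - e2.mean())))

def mc_bp_test(y, X, Z, n_rep=99):
    """Under H0 (homoskedastic Gaussian errors) the statistic depends on
    the data only through scale-free functions of the OLS residuals, so
    fresh N(0,1) errors reproduce its exact null distribution."""
    return mc_pvalue(bp_stat(y, X, Z),
                     lambda rng: bp_stat(rng.standard_normal(len(y)), X, Z),
                     n_rep)
```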
Abstract:
In this paper, we analyze recent developments in econometrics in light of the theory of statistical testing. We first review some basic principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally unsuited to the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze several specific cases of such problems: (1) the construction of confidence intervals in structural models affected by identification problems; and (2) the construction of tests for nonparametric hypotheses, including procedures that are robust to heteroskedasticity, non-normality, or dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we underline the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be established in finite samples.
Abstract:
Islam prohibits riba, an Arabic word meaning both usury and interest. The prohibition of riba is set out in Islamic law, which arose in medieval Arabia. It underlies Islamic finance, which experienced remarkable growth during the second half of the twentieth century. We examine the origins of this prohibition, the problems Islamic finance currently faces, and its prospects for the future.
Abstract:
This paper proposes an explanation for why efficient reforms are not carried out when losers have the power to block their implementation, even though compensating them is feasible. We construct a signaling model with two-sided incomplete information in which a government faces the task of sequentially implementing two reforms by bargaining with interest groups. The organization of interest groups is endogenous. Compensations are distortionary and government types differ in their concern about distortions. We show that, when compensations are allowed to be informative about the government’s type, there is a bias against the payment of compensations and the implementation of reforms. This is because paying high compensations today provides incentives for some interest groups to organize and oppose subsequent reforms with the sole purpose of receiving a transfer. By paying lower compensations, governments attempt to prevent such interest groups from organizing. However, this comes at the cost of reforms being blocked by interest groups with relatively high losses.
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one’s situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as well as an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
A full understanding of public affairs requires the ability to distinguish between the policies that voters would like the government to adopt, and the influence that different voters or groups of voters actually exert in the democratic process. We consider the properties of a computable equilibrium model of a competitive political economy in which the economic interests of groups of voters and their effective influence on equilibrium policy outcomes can be explicitly distinguished and computed. The model incorporates an amended version of the GEMTAP tax model, and is calibrated to data for the United States for 1973 and 1983. Emphasis is placed on how the aggregation of GEMTAP households into groups, within which economic and political behaviour is assumed homogeneous, affects the numerical representation of interests and influence for representative members of each group. Experiments with the model suggest that changes in both interests and influence are important parts of the story behind the evolution of U.S. tax policy in the decade after 1973.
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin’s q and to a model of academic performance.
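For concreteness, here is a minimal sketch of the Anderson-Rubin (1949) statistic that the instrument substitution method generalizes (the notation and the simple homoskedastic F-form are assumptions of the sketch, not the paper's procedure):

```python
import numpy as np
from scipy import stats

def _ssr(A, y):
    """Sum of squared residuals from regressing y on the columns of A."""
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ b
    return r @ r

def anderson_rubin(y, Y, X, Z, beta0):
    """Test H0: beta = beta0 in y = Y beta + X gamma + u, instruments Z.
    Regress y - Y beta0 on [X, Z]; under H0, with Gaussian errors and
    strictly exogenous X and Z, the F statistic on the Z block is exactly
    F(q, n - k) distributed, whatever the strength of the instruments."""
    y0 = y - Y @ beta0
    XZ = np.column_stack([X, Z])
    q, df2 = Z.shape[1], len(y) - XZ.shape[1]
    F = ((_ssr(X, y0) - _ssr(XZ, y0)) / q) / (_ssr(XZ, y0) / df2)
    return F, stats.f.sf(F, q, df2)
```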
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes the framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or yen-dollar spot returns.
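LM confidence intervals of this kind are typically built by test inversion; the following is a generic sketch of that step (the grid bounds and the p-value function are placeholders, not the authors' code):

```python
import numpy as np

def invert_test(grid, pvalue_at, alpha=0.05):
    """Confidence set by test inversion: keep every candidate parameter
    value that an identification-robust test (e.g. an LM test) does not
    reject at level alpha. With weak instruments the set remains valid
    but may be wide, disconnected, or even unbounded."""
    return [b for b in grid if pvalue_at(b) > alpha]

# Hypothetical use: the risk premium is insignificant at the 5% level
# precisely when 0.0 belongs to invert_test(np.linspace(-1, 1, 401),
# lm_pvalue), with lm_pvalue an assumed function returning the LM
# p-value at a candidate beta0.
```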
Abstract:
We reconsider the discrete version of the axiomatic cost-sharing model. We propose a condition of (informational) coherence requiring that not all informational refinements of a given problem be solved differently from the original problem. We prove that strictly coherent linear cost-sharing rules must be simple random-order rules.
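As background for the terminology, a random-order rule charges each agent its incremental cost along an ordering of the agents and then averages over a distribution on orders; the sketch below implements the uniform case (the Shapley value of the cost game) as an illustration, not the paper's construction:

```python
from itertools import permutations

def incremental_shares(order, cost):
    """Charge each agent its incremental cost when agents are served in
    the given order; cost maps a frozenset of agents to a total cost,
    with cost(frozenset()) == 0."""
    shares, served = {}, frozenset()
    for a in order:
        shares[a] = cost(served | {a}) - cost(served)
        served = served | {a}
    return shares

def uniform_random_order_shares(agents, cost):
    """Average the incremental shares over all orders: the uniform
    random-order rule, i.e. the Shapley value of the cost game."""
    perms = list(permutations(agents))
    totals = {a: 0.0 for a in agents}
    for order in perms:
        for a, s in incremental_shares(order, cost).items():
            totals[a] += s
    return {a: s / len(perms) for a, s in totals.items()}
```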
Abstract:
This article studies mobility patterns of German workers in light of a model of sector-specific human capital. Furthermore, I employ and describe little-used data on continuous on-the-job training occurring after apprenticeships. Results are presented describing the incidence and duration of continuous training. Continuous training is quite common, despite the high incidence of apprenticeships which precedes this part of a worker's career. Most previous studies have only distinguished between firm-specific and general human capital, usually concluding that training was general. Inconsistent with those conclusions, I show that German men are more likely to find a job within the same sector if they have received continuous training in that sector. These results are similar to those obtained for young U.S. workers, and suggest that sector-specific capital is an important feature of very different labor markets. In addition, they suggest that the observed effect of training on mobility is sensitive to the state of the business cycle, indicating a more complex interaction between supply and demand than most theoretical models allow for.