930 results for Discrete Choice Model
Abstract:
In this paper we challenge the conventional view that strikes are caused by asymmetric information regarding firm profitability such that union members are uninformed. Instead, we build an expressive model of strikes where the perception of unfairness provides the expressive benefit of voting for a strike. The model predicts that larger union size increases both wage offers and the incidence of strikes. Furthermore, while asymmetric information is still important in causing strikes, we find that it is the employer who is not fully informed about the level of emotionality within the union, thereby contributing to strike incidence. An empirical test using UK data provides support for the predictions. In particular, union size has a positive effect on the incidence of strikes and other industrial actions even when asymmetric information regarding profitability is controlled for.
Abstract:
We derive a rational model of separable consumer choice which can also serve as a behavioral model. The central construct is λ, the marginal utility of money, derived from the consumer's rest-of-life problem. We present a robust approximation of λ, and show how to incorporate liquidity constraints, indivisibilities and adaptation to a changing environment. We find connections with numerous historical and recent constructs, both behavioral and neoclassical, and draw contrasts with standard partial equilibrium analysis. The result is a better grounded, more flexible and more intuitive description of consumer choice.
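As a hedged illustration of the central construct (a textbook reading, not the paper's own derivation), λ can be seen as the multiplier linking the current purchase to the rest-of-life value of money. With u the current utility, W the rest-of-life value function, m money holdings and p prices:

    \[
    \max_{x \ge 0}\; u(x) + W(m - p \cdot x),
    \qquad
    \lambda \;\equiv\; W'(m - p \cdot x^{*}),
    \]

so an interior optimum satisfies \(\partial u / \partial x_i = \lambda\, p_i\): each good is bought up to the point where its marginal utility per unit of money equals λ.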
Abstract:
In this paper we study decision making in situations where the individual’s preferences are not assumed to be complete. First, we identify conditions that are necessary and sufficient for choice behavior in general domains to be consistent with maximization of a possibly incomplete preference relation. In this model of maximally dominant choice, the agent defers/avoids choosing at those and only those menus where a most preferred option does not exist. This allows for simple explanations of conflict-induced deferral and choice overload. It also suggests a criterion for distinguishing between indifference and incomparability based on observable data. A simple extension of this model also incorporates decision costs and provides a theoretical framework that is compatible with the experimental design that we propose to elicit possibly incomplete preferences in the lab. The design builds on the introduction of monetary costs that induce choice of a most preferred feasible option if one exists and deferral otherwise. Based on this design we found evidence suggesting that a quarter of the subjects in our study had incomplete preferences, and that these made significantly more consistent choices than a group of subjects who were forced to choose. The latter effect, however, is mitigated once data on indifferences are accounted for.
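A minimal Python sketch of the deferral rule described above (the encoding of the relation and the menu format are illustrative assumptions, not the paper's formalism): the agent picks a most preferred option when one exists and defers otherwise.

    # weakly_prefers is a set of pairs (a, b) meaning "a is weakly preferred to b".
    def choose(menu, weakly_prefers):
        """Return an option weakly preferred to every other one, or None (defer)."""
        for x in menu:
            if all((x, y) in weakly_prefers for y in menu if y != x):
                return x
        return None  # no dominant option exists: the agent defers/avoids choosing

    # Example: a is comparable to b (a preferred to b), but c is incomparable to both.
    R = {("a", "b")}
    print(choose({"a", "b"}, R))       # -> 'a'  (a most preferred option exists)
    print(choose({"a", "b", "c"}, R))  # -> None (incomparability induces deferral)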
Abstract:
In this paper we analyze productivity and welfare losses from capital misallocation in a general equilibrium model of occupational choice and endogenous financial intermediation. We study the effects of borrowing and lending, insurance, and risk sharing on the optimal allocation of resources. We find that financial markets together with general equilibrium effects have a large impact on entrepreneurs' entry and firm-size decisions. Efficiency gains are increasing in the quality of financial markets, particularly in their ability to alleviate a financing constraint by providing insurance against idiosyncratic risk.
Abstract:
This article provides a theoretical and empirical analysis of a firm's optimal R&D strategy choice. In this paper a firm's R&D strategy is assumed to be endogenous and allowed to depend on both internal firm characteristics and external factors. Firms choose between two strategies: either they engage in R&D, or they abstain from own R&D and imitate the outcomes of innovators. In the theoretical model this yields three types of equilibria, in which either all firms innovate, some firms innovate and others imitate, or no firm innovates. Firms' equilibrium strategies crucially depend on external factors. We find that the efficiency of intellectual property rights protection positively affects firms' incentives to engage in R&D, while competitive pressure has a negative effect. In addition, smaller firms are found to be more likely to become imitators when the product is homogeneous and the level of spillovers is high. These results are supported by empirical evidence for German firms from manufacturing and services sectors. Regarding social welfare our results indicate that strengthening intellectual property protection can have an ambiguous effect. In markets characterized by a high rate of innovation a reduction of intellectual property rights protection can discourage innovative performance substantially. However, a reduction of patent protection can also increase social welfare because it may induce imitation. This indicates that policy issues such as the optimal length and breadth of patent protection cannot be resolved without taking into account specific market and firm characteristics. Journal of Economic Literature Classification Numbers: C35, D43, L13, L22, O31. Keywords: Innovation; imitation; spillovers; product differentiation; market competition; intellectual property rights protection.
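A toy Python illustration of the three equilibrium types (the functional forms and parameter values below are invented for the example, not taken from the paper): innovators keep the IPR-protected share of the innovation value net of R&D cost, while imitators capture leaked value that grows with the share s of innovators.

    V = 1.0      # value created by an innovation (normalised; hypothetical)
    ipr = 0.5    # effectiveness of intellectual property protection (hypothetical)
    c = 0.3      # R&D cost (hypothetical)

    def pi_innovate(s): return ipr * V - c          # innovator payoff
    def pi_imitate(s):  return (1 - ipr) * s * V    # imitator payoff, rises with s

    s_star = (ipr * V - c) / ((1 - ipr) * V)        # share equalising the payoffs
    if s_star >= 1:
        print("all firms innovate")
    elif s_star <= 0:
        print("no firm innovates")
    else:
        print(f"mixed equilibrium: share {s_star:.2f} innovates;",
              f"payoffs {pi_innovate(s_star):.2f} = {pi_imitate(s_star):.2f}")

With these numbers the interior case obtains (s* = 0.40); raising ipr or lowering c pushes the economy toward the all-innovate equilibrium, mirroring the comparative statics in the abstract.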
Abstract:
In this paper, we present a stochastic model for disability insurance contracts. The model is based on a discrete time non-homogeneous semi-Markov process (DTNHSMP) into which the backward recurrence time process is introduced. This permits a more exhaustive study of disability evolution and a more efficient approach to the duration problem. The use of semi-Markov reward processes makes it possible to derive equations for the prospective and retrospective mathematical reserves. The model is applied to a sample of contracts drawn at random from a mutual insurance company.
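An illustrative Python sketch of a discrete-time semi-Markov simulation that tracks the backward recurrence time (all states, transition probabilities and sojourn distributions are invented for the demo; a non-homogeneous version would additionally let P and sojourn depend on calendar time t):

    import random

    # embedded jump chain between insurance states (hypothetical numbers)
    P = {"active":   {"disabled": 0.9, "dead": 0.1},
         "disabled": {"active": 0.6, "dead": 0.4},
         "dead":     {}}

    def sojourn(state):
        # sojourn time in periods (hypothetical distribution)
        return random.choice([1, 2, 3])

    def simulate(horizon, state="active"):
        t, path = 0, []
        while t < horizon and P[state]:
            stay = sojourn(state)
            for u in range(stay):
                path.append((t + u, state, u))  # u is the backward recurrence time
            t += stay
            state = random.choices(list(P[state]),
                                   weights=list(P[state].values()))[0]
        return path

    for t, s, u in simulate(10):
        print(t, s, u)

Tracking u alongside the state is what allows duration-dependent quantities, such as the reserves mentioned above, to be evaluated path by path.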
Abstract:
Research project carried out during a stay at the Snider Entrepreneurial Research Center of the Wharton School, University of Pennsylvania, USA, between July and December 2007. The aim of this project is to study the relationship between knowledge management strategies and information and communication technologies (ICT) in the evolution of populations of organizations, and their effects on industrial patterns of spatial agglomeration. To this end we adopt an approach based on an agent-based model to obtain meaningful, testable hypotheses about the evolution of populations of organizations within geographic clusters. The simulation model incorporates the perspectives and assumptions of a conceptual framework, the Information Space or I-Space. This allows an information-based conceptualization of the economic environment that takes its spatial and temporal dimensions into account. The model's parameters make it possible to assign specific knowledge management strategies to the various agents and to place them at a position in physical space. The simulation shows how the adoption of different knowledge management strategies influences the evolution of organizations and their spatial location, and how this evolution is modified by the development of ICT. By modelling two well-known cases of high-technology geographic clusters, Silicon Valley in California and Route 128 around Boston, we study the interplay between the knowledge management strategies adopted by firms and their choice of spatial location, and how this is affected by the evolution of information and communication technologies (ICT). The results generate a series of rich, promising hypotheses about the impact of ICT development on the dynamics of these geographic clusters. In particular, we find that the structuring of knowledge and spatial agglomeration co-evolve, and that this co-evolution is significantly altered by the development of ICT.
Abstract:
An alternative model for the geodynamic evolution of Southeast Asia is proposed and inserted in a modern plate tectonic model. The reconstruction methodology is based on dynamic plate boundaries, constrained by data such as spreading rates and subduction velocities; in this way it differs from the classical continental drift models proposed so far. The different interpretations of the location of the Palaeotethys suture in Thailand are reviewed; the Tertiary Mae Yuam fault is seen as the emplacement of the suture. East of the suture we identify an Indochina-derived terrane for which we keep the name Shan-Thai, formerly used to identify the Cimmerian block present in Southeast Asia, now called Sibumasu. This nomenclatural choice was made on the basis of the geographic location of the terrane (Eastern Shan States in Burma and Central Thailand) and in order not to introduce new, confusing terminology. The closure of the Eastern Palaeotethys is related to a southward subduction of the ocean, which triggered the Eastern Neotethys to open as a back-arc; this polarity is supported by the presence of Late Carboniferous-Early Permian arc magmatism in Mergui (Burma) and in the Lhasa block (South Tibet), and by the absence of arc magmatism of the same age east of the suture. In order to explain the presence of Carboniferous-Early Permian and Permo-Triassic volcanic arcs in Cambodia, Upper Triassic magmatism in Eastern Vietnam and Lower-Middle Permian arc volcanites in Western Sumatra, we introduce the Orang Laut terranes concept. These terranes were detached from Indochina and South China during back-arc opening of the Poko-Song Ma system, due to the westward subduction of the Palaeopacific. This also explains the location of the Cathaysian West Sumatra block to the west of the Cimmerian Sibumasu block.
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Due to population pressure in hazardous zones, the socio-economic impact is much higher than in the past. Therefore, the development of indicative susceptibility hazard maps is of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model has been developed for regional susceptibility assessments using digital elevation models (DEMs) within a GIS-based approach. The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its flexibility: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. It was shown that the quality of the DEM is the most important parameter for obtaining reliable propagation results, but also for identifying potential debris flow sources.
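A toy Python sketch of DEM-based spreading in the spirit described above (a crude multiple-flow-direction rule for illustration only, not the actual Flow-R algorithms; the grid and parameters are invented):

    import numpy as np

    dem = np.array([[9., 8., 7.],
                    [8., 6., 5.],
                    [7., 5., 3.]])   # tiny synthetic DEM, highest at top-left

    def spread(dem, source, steps=3):
        mass = np.zeros_like(dem)
        mass[source] = 1.0           # unit "susceptibility" mass at the source cell
        rows, cols = dem.shape
        for _ in range(steps):
            new = np.zeros_like(mass)
            for r in range(rows):
                for c in range(cols):
                    if mass[r, c] == 0.0:
                        continue
                    # collect lower neighbours and their elevation drops
                    drops = {}
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                                d = dem[r, c] - dem[rr, cc]
                                if d > 0:
                                    drops[(rr, cc)] = d
                    if not drops:                       # local minimum: deposit
                        new[r, c] += mass[r, c]
                        continue
                    total = sum(drops.values())
                    for (rr, cc), d in drops.items():   # split mass by relative drop
                        new[rr, cc] += mass[r, c] * d / total
            mass = new
        return mass

    print(spread(dem, source=(0, 0)).round(2))

The sketch makes the abstract's point about DEM quality concrete: the spreading pattern is driven entirely by the elevation differences, so noise in the DEM propagates directly into the susceptibility map.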
Abstract:
The article is intended to improve our understanding of the reasons underlying the intellectual migration of scientists from well-known cognitive domains to nascent scientific fields. To that end, we first present a number of findings from the sociology of science that give different insights into this phenomenon. We then attempt to bring some of these insights together under the conceptual roof of an actor-based approach linking expected utility and diffusion theory. Intellectual migration is regarded as the rational choice of scientists who decide under uncertainty and on the basis of a number of decision-making variables, which define the probabilities, costs, and benefits of the migration.
Abstract:
This paper examines a dataset which is modeled well by the Poisson-Log Normal process and by this process mixed with Log Normal data, which are both turned into compositions. This generates compositional data that has zeros without any need for conditional models or assuming that there is missing or censored data that needs adjustment. It also enables us to model dependence on covariates and within the composition.
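A minimal Python simulation of the mechanism (parameter values are arbitrary): lognormal intensities, Poisson counts, then closure to a composition. Zeros arise naturally from the Poisson step, with no censoring or conditioning needed.

    import numpy as np

    rng = np.random.default_rng(42)
    n, parts = 5, 4
    log_mu = rng.normal(1.0, 1.0, size=(n, parts))   # hypothetical parameters
    counts = rng.poisson(np.exp(log_mu))             # Poisson-Log Normal counts
    comp = counts / counts.sum(axis=1, keepdims=True)  # close to compositions
    print(counts)          # some entries are exact zeros
    print(comp.round(3))   # compositional data with structural-looking zeros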
Abstract:
Female mate choice influences the maintenance of genetic variation by altering the mating success of males with different genotypes. The evolution of preferences themselves, on the other hand, depends on genetic variation present in the population. Few models have tracked this feedback between a choice gene and its effects on genetic variation, in particular when genes that determine offspring viability and attractiveness have dominance effects. Here we build a population genetic model that allows comparing the evolution of various choice rules in a single framework. We first consider preferences for good genes and show that focused preferences for homozygotes evolve more easily than broad preferences, which allow heterozygous males high mating success too. This occurs despite better maintenance of genetic diversity in the latter scenario, and we discuss why empirical findings of superior mating success of heterozygous males consequently do not immediately lead to a better understanding of the lek paradox. Our results thus suggest that the mechanisms that help maintain genetic diversity also have a flipside of making female choice an inaccurate means of producing the desired kind of offspring. We then consider preferences for heterozygosity per se, and show that these evolve only under very special conditions. Choice for compatible genotypes can evolve but its selective advantage diminishes quickly due to frequency-dependent selection. Finally, we show that our model reproduces earlier results on selfing, when the female choice strategy produces assortative mating. Overall, our model indicates that various forms of heterozygote-favouring (or variable) female choice pose a problem for the theory of sexual ornamentation based on indirect benefits, rather than a solution.
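A highly simplified numeric illustration of one ingredient of the model (a single diallelic locus with an invented preference strength; not the paper's full framework): a female preference for homozygous males shifts male mating success by genotype.

    p = 0.5                                            # initial A-allele frequency
    alpha = 0.5                                        # preference strength (hypothetical)
    g = {"AA": p*p, "Aa": 2*p*(1-p), "aa": (1-p)**2}   # genotype frequencies (HWE)
    w = {"AA": 1+alpha, "Aa": 1.0, "aa": 1+alpha}      # homozygous males preferred
    tot = sum(g[x]*w[x] for x in g)
    mate = {x: g[x]*w[x]/tot for x in g}               # male mating-success shares
    print(mate)                        # Aa falls from 0.50 to 0.40 among fathers
    print(mate["AA"] + 0.5*mate["Aa"]) # A frequency among fathers (0.5 by symmetry here)

Even in this stripped-down setting, focused choice for homozygotes reduces heterozygote representation among fathers without changing allele frequencies, hinting at the tension between mate choice and the maintenance of genetic variation discussed above.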
Abstract:
This paper studies the limits of discrete time repeated games with public monitoring. We solve and characterize the Abreu, Milgrom and Pearce (1991) problem. We find that for the "bad" ("good") news model the lower (higher) magnitude events suggest cooperation, i.e., zero punishment probability, while the higher (lower) magnitude events suggest defection, i.e., punishment with probability one. Public correlation is used to connect these two sets of signals and to make the enforceability constraint bind. The dynamic and limit behavior of the punishment probabilities for variations in ... (the discount rate) and ... (the time interval) are characterized, as well as the limit payoffs for all these scenarios (we also introduce uncertainty in the time domain). The obtained ... limits are, to the best of our knowledge, new. The obtained ... limits coincide with Fudenberg and Levine (2007) and Fudenberg and Olszewski (2011), with the exception that we clearly state the precise informational conditions that cause the limit to converge from above, to converge from below, or to degenerate. JEL: C73, D82, D86. Keywords: Repeated Games, Frequent Monitoring, Random Public Monitoring, Moral Hazard, Stochastic Processes.
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that if the data follow the model, then asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performances of the WMLs based on each of them are studied. The result is that in a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean squared error and in terms of bias under contamination. Moreover, the performance of the WML is rather stable under a change of the MDE, even when the MDEs themselves behave very differently. Two examples of application of the WML to real data are considered. In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
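An illustrative two-phase sketch in Python, in the spirit of the procedure (the disparity choice, rejection cutoff, and data are assumptions for the demo, not the thesis's exact method): a robust initial Poisson fit by minimum Hellinger-type disparity, hard-rejection weights on low-probability points, then a weighted MLE.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    data = np.array([1, 2, 0, 3, 2, 1, 2, 25, 30])   # two gross outliers

    def neg_affinity(lam, x):
        # minimizing -sum(sqrt(emp * model)) = minimum Hellinger distance fit
        vals, counts = np.unique(x, return_counts=True)
        emp = counts / counts.sum()
        return -np.sum(np.sqrt(emp * poisson.pmf(vals, lam)))

    lam0 = minimize_scalar(neg_affinity, bounds=(0.01, 50),
                           args=(data,), method="bounded").x
    # crude weighting step: reject points with very low model probability
    wts = (poisson.pmf(data, lam0) > 1e-4).astype(float)
    lam_wml = np.sum(wts * data) / np.sum(wts)       # weighted MLE for the mean
    print(round(lam0, 2), round(lam_wml, 2))

The plain MLE here would be the sample mean, dragged upward by the two outliers; the robust initial fit ignores them and the weighted MLE then recovers near-full efficiency on the clean part of the sample.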
Abstract:
We developed a procedure that combines three complementary computational methodologies to improve the theoretical description of the electronic structure of nickel oxide. The starting point is a Car-Parrinello molecular dynamics simulation to incorporate vibrorotational degrees of freedom into the material model. By means of complete active space self-consistent field second-order perturbation theory (CASPT2) calculations on embedded clusters extracted from the resulting trajectory, we describe localized spectroscopic phenomena in NiO with an efficient treatment of electron correlation. The inclusion of thermal motion into the theoretical description allows us to study electronic transitions that would otherwise be dipole forbidden in the ideal structure, and results in a natural reproduction of the band broadening. Moreover, we improved the embedded cluster model by incorporating self-consistently, at the complete active space self-consistent field (CASSCF) level, a discrete (or direct) reaction field (DRF) in the cluster surroundings. The DRF approach offers an efficient treatment of the electric response effects of the crystalline embedding on the electronic transitions localized in the cluster. We offer accurate theoretical estimates of the absorption spectrum and the density of states around the Fermi level of NiO, and a comprehensive explanation of the source of the broadening and of the relaxation of the charge transfer states due to the adaptation of the environment.