110 results for Divisor of Zero
Abstract:
This paper studies the limits of discrete-time repeated games with public monitoring. We solve and characterize the Abreu, Milgrom and Pearce (1991) problem. We find that in the "bad" ("good") news model the lower- (higher-) magnitude events suggest cooperation, i.e., zero punishment probability, while the higher- (lower-) magnitude events suggest defection, i.e., punishment with probability one. Public correlation is used to connect these two sets of signals and to make the enforceability constraint bind. The dynamic and limit behavior of the punishment probabilities for variations in ... (the discount rate) and ... (the time interval) is characterized, as well as the limit payoffs for all these scenarios (we also introduce uncertainty in the time domain). The obtained ... limits are, to the best of our knowledge, new. The obtained ... limits coincide with Fudenberg and Levine (2007) and Fudenberg and Olszewski (2011), with the exception that we clearly state the precise informational conditions that cause the limit to converge from above, to converge from below, or to degenerate. JEL: C73, D82, D86. KEYWORDS: Repeated Games, Frequent Monitoring, Random Public Monitoring, Moral Hazard, Stochastic Processes.
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zeros is usually understood as "a trace too small to measure", it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values on compositional data sets is introduced.
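The multiplicative replacement described in this abstract admits a compact implementation. The following Python/NumPy sketch illustrates the formula of Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003); the function name and the choice of delta are ours, for illustration only.

import numpy as np

def multiplicative_replacement(x, delta, kappa=1.0):
    # Replace rounded zeros in a composition x (parts summing to kappa)
    # by small values delta, shrinking the non-zero parts multiplicatively
    # so the total, and all ratios among non-zero parts, are preserved.
    x = np.asarray(x, dtype=float)
    delta = np.broadcast_to(np.asarray(delta, dtype=float), x.shape)
    zeros = (x == 0)
    r = np.where(zeros, delta, 0.0)
    r[~zeros] = x[~zeros] * (1.0 - delta[zeros].sum() / kappa)
    return r

# Example: a 4-part composition closed to 1 with one rounded zero.
print(multiplicative_replacement([0.40, 0.35, 0.25, 0.0], delta=0.005))

Because only the non-zero parts are rescaled by a common factor, log-ratios among them, and hence the covariance structure of zero-free subcompositions, are untouched, which is exactly the coherence property the abstract emphasizes.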
Abstract:
The log-ratio methodology makes available powerful tools for analyzing compositional data. Nevertheless, the use of this methodology is only possible for those data sets without null values. Consequently, in those data sets where zeros are present, a previous treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not contemplate the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and we introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and we estimate the posterior probabilities. Then we apply a multiplicative modification for the non-zero values. We present a case study where this new methodology is applied. Key words: count data, multiplicative replacement, composition, log-ratio analysis
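As a sketch of the kind of estimator described, the Python function below combines the Dirichlet posterior expectation with a multiplicative adjustment of the non-zero parts; the default Jeffreys-type prior (alpha = 0.5) and the function name are illustrative assumptions, not necessarily the paper's choices.

import numpy as np

def bayes_mult_replacement(counts, alpha=0.5):
    # Mixed Bayesian-multiplicative treatment of count zeros (sketch).
    # counts: non-negative counts for one sample; alpha: Dirichlet prior
    # parameter (scalar or per-part). Returns a composition summing to 1.
    n = np.asarray(counts, dtype=float)
    alpha = np.broadcast_to(np.asarray(alpha, dtype=float), n.shape)
    N = n.sum()
    post = (n + alpha) / (N + alpha.sum())   # posterior Dirichlet mean
    zeros = (n == 0)
    r = np.empty_like(post)
    r[zeros] = post[zeros]                   # zeros take their posterior mass
    # Non-zero parts keep their observed ratios, shrunk so the total closes to 1.
    r[~zeros] = (n[~zeros] / N) * (1.0 - post[zeros].sum())
    return r

print(bayes_mult_replacement([12, 0, 5, 83]))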
Abstract:
Larger and larger deformable mirrors, with ever more actuators, are currently being used in adaptive optics applications. Controlling mirrors with hundreds of actuators is a topic of great interest, since classical control techniques based on the pseudoinverse of the system control matrix become too slow when dealing with matrices of such large dimensions. This doctoral thesis proposes a method for accelerating and parallelizing the control algorithms of these mirrors, through the application of a control technique based on zeroing the smallest components of the control matrix (sparsification), followed by an optimization of the actuator ordering according to the shape of the matrix, and finally by its subsequent division into small tridiagonal blocks. These blocks are much smaller and easier to use in the computations, which allows much higher computation speeds thanks to the elimination of the null components of the control matrix. Moreover, this approach allows the parallelization of the computation, giving the system an additional speed component. Even without parallelization, an increase of almost 40% in the convergence speed of mirrors with only 37 actuators has been obtained using the proposed technique. To validate this, a complete new experimental setup has been implemented, including a programmable phase modulator for generating turbulence by means of phase screens, and a complete model of the control loop has been developed to investigate the performance of the proposed algorithm. The results, both in simulation and experimentally, show full equivalence in the deviation values after compensation of the different types of aberrations for the different algorithms used, although the method proposed here entails a much lower computational load. The procedure is expected to be very successful when applied to very large mirrors.
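To make the sparsification step concrete, here is a minimal Python sketch of the general idea (zero the small entries of the control matrix, then apply it in sparse form); the relative-threshold rule and all names are illustrative, and the sketch does not reproduce the thesis's actuator reordering or tridiagonal blocking.

import numpy as np
from scipy import sparse

def sparsify_control_matrix(C, rel_tol=0.05):
    # Zero out entries of C that are small relative to its largest
    # magnitude, and return a sparse (CSR) copy for fast products.
    C = np.asarray(C, dtype=float)
    threshold = rel_tol * np.abs(C).max()
    return sparse.csr_matrix(np.where(np.abs(C) < threshold, 0.0, C))

# Example: actuator commands u = C s with a sparsified control matrix.
rng = np.random.default_rng(0)
C = rng.normal(size=(37, 128))       # 37 actuators, 128 sensor signals
s = rng.normal(size=128)             # wavefront-sensor measurement
u = sparsify_control_matrix(C) @ s   # sparse matrix-vector product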
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for the interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero for the value of 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the limit of detection of the equipment used will generate spurious distributions, especially in ternary diagrams. The same situation will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum ones, but while copper will always be above the limit of detection, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
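A minimal sketch of this correlation-based imputation, under our reading of the abstract: fit a log-log regression of Mo on Cu over the lower quartile of the detected Mo values (log-log because of the stated lognormality), then predict Mo wherever only a rounded zero was reported. The variable names and the log transform are our assumptions.

import numpy as np

def impute_rounded_zeros(cu, mo):
    # Impute rounded zeros in mo (coded as 0) from cu via a log-log
    # regression fitted on the lower quartile of the detected mo values.
    cu = np.asarray(cu, dtype=float)
    mo = np.asarray(mo, dtype=float)
    detected = mo > 0
    q1 = np.quantile(mo[detected], 0.25)   # lower quartile of real values
    fit = detected & (mo <= q1)            # fit near the detection limit
    slope, intercept = np.polyfit(np.log(cu[fit]), np.log(mo[fit]), 1)
    out = mo.copy()
    out[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
    return out

As the abstract notes, each imputed value then varies with the covariate instead of collapsing onto a single constant below the detection limit.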
Abstract:
In the finite field (FF) treatment of vibrational polarizabilities and hyperpolarizabilities, the field-free Eckart conditions must be enforced in order to prevent molecular reorientation during geometry optimization. These conditions are implemented here for the first time. Our procedure facilitates identification of the field-induced internal coordinates that make the major contribution to the vibrational properties. Using only two of these coordinates, quantitative accuracy for nuclear relaxation polarizabilities and hyperpolarizabilities is achieved in π-conjugated systems. From these two coordinates a single most efficient natural conjugation coordinate (NCC) can be extracted. The limitations of this one-coordinate approach are discussed. It is shown that the Eckart conditions can lead to an isotope effect that is comparable to the isotope effect on zero-point vibrational averaging, but with a different mass-dependence.
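For reference, the field-free Eckart conditions mentioned above take the standard form, with m_i the nuclear masses, \mathbf{r}_i the instantaneous nuclear positions, and \mathbf{r}_i^{0} the reference geometry:

\sum_i m_i (\mathbf{r}_i - \mathbf{r}_i^{0}) = \mathbf{0}, \qquad \sum_i m_i \, \mathbf{r}_i^{0} \times (\mathbf{r}_i - \mathbf{r}_i^{0}) = \mathbf{0}.

The first condition removes overall translation and the second removes (first-order) rotation, which is what prevents molecular reorientation during the field-dependent geometry optimizations.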
Abstract:
To obtain a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the methane C-H bond to the palladium atom, we have explored this PES using a hierarchical series of ab initio methods (Hartree-Fock, second-order Møller-Plesset perturbation theory, fourth-order Møller-Plesset perturbation theory with single, double and quadruple excitations, coupled cluster theory with single and double excitations (CCSD), and with triple excitations treated perturbatively [CCSD(T)]) and hybrid density functional theory using the B3LYP functional, in combination with a hierarchical series of ten Gaussian-type basis sets, up to g polarization. Relativistic effects are taken into account either through a relativistic effective core potential for palladium or through a full four-component all-electron approach. Counterpoise-corrected relative energies of stationary points are converged to within 0.1-0.2 kcal/mol as a function of the basis-set size. Our best estimate of kinetic and thermodynamic parameters is -8.1 (-8.3) kcal/mol for the formation of the reactant complex, 5.8 (3.1) kcal/mol for the activation energy relative to the separate reactants, and 0.8 (-1.2) kcal/mol for the reaction energy (zero-point vibrational energy-corrected values in parentheses). This agrees well with available experimental data. Our work highlights the importance of sufficient higher-angular-momentum polarization functions, f and g, for correctly describing metal d-electron correlation and, thus, for obtaining reliable relative energies. We show that standard basis sets, such as LANL2DZ+1f for palladium, are not sufficiently polarized for this purpose and lead to erroneous CCSD(T) results. B3LYP is associated with smaller basis-set superposition errors and shows faster convergence with basis-set size but yields relative energies (in particular, a reaction barrier) that are ca. 3.5 kcal/mol higher than the corresponding CCSD(T) values.
Abstract:
Initial convergence of the perturbation series expansion for vibrational nonlinear optical (NLO) properties was analyzed. The zero-point vibrational average (ZPVA) was obtained through first order in mechanical plus electrical anharmonicity. Results indicated that higher-order terms in electrical and mechanical anharmonicity can make substantial contributions to the pure vibrational polarizability of typical NLO molecules.
Abstract:
The longwave emission of planetary atmospheres that contain a condensable gas absorbing in the infrared (i.e., longwave) region, in equilibrium with its liquid phase at the surface, may exhibit an upper bound. Here we analyze the effect of the atmospheric absorption of sunlight on this radiation limit. We assume that the atmospheric absorption of infrared radiation is independent of wavelength except within the spectral width of the atmospheric window, where it is zero. The temperature profile in radiative equilibrium is obtained analytically as a function of the longwave optical thickness. For illustrative purposes, numerical values for the infrared atmospheric absorption (i.e., greenhouse effect) and the liquid-vapor equilibrium curve of the condensable absorbing gas refer to water. Values for the atmospheric absorption of sunlight (i.e., antigreenhouse effect) span a wide range, since our aim is to provide a qualitative view of their effects. We find that atmospheres with a transparent region in the infrared spectrum do not present an absolute upper bound on the infrared emission. This result may also be found in atmospheres opaque at all infrared wavelengths if the fraction of sunlight absorbed in the atmosphere increases with the longwave opacity.
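For orientation, in the simplest all-gray, window-free limit with no atmospheric absorption of sunlight, the two-stream radiative-equilibrium profile has the well-known form, with \tau the longwave optical depth measured from the top of the atmosphere and T_e the effective emission temperature:

\sigma T^4(\tau) = \frac{\sigma T_e^4}{2} (1 + \tau).

The profile derived in the paper generalizes this reference case by adding a spectral window and shortwave (antigreenhouse) absorption, which is what determines whether the longwave emission is bounded.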
Abstract:
We study a novel class of noisy rational expectations equilibria in markets with a large number of agents. We show that, as long as noise increases with the number of agents in the economy, the limiting competitive equilibrium is well-defined and leads to non-trivial information acquisition, perfect information aggregation, and partially revealing prices, even if per capita noise tends to zero. We find that in such an equilibrium risk sharing and price revelation play different roles than in the standard limiting economy in which per capita noise is not negligible. We apply our model to study information sales by a monopolist, information acquisition in multi-asset markets, and derivatives trading. The limiting equilibria are shown to be perfectly competitive, even when a strategic solution concept is used.
Abstract:
We show that every finite N-player normal form game possesses a correlated equilibrium with a precise lower bound on the number of outcomes to which it assigns zero probability. In particular, the largest games with a unique fully supported correlated equilibrium are two-player games; moreover, the lower bound grows exponentially in the number of players N.
Abstract:
Traditional economic wisdom says that free entry in a market will drive profits down to zero. This conclusion is usually drawn under the assumption of perfect information. We assume that a priori there exists imperfect information about the profitability of the market, but that potential entrants may learn the demand curve perfectly at negligible cost by engaging in market research. Even if in equilibrium firms learn the demand perfectly, profits may be strictly positive because of insufficient entry. The mere fact that it will not become common knowledge that every entrant has perfect information about demand causes this surprising result. "Belief means doubt. Knowing means certainty." (Introduction to the Kabalah)
Abstract:
This paper provides an analytical characterization of Markov perfect equilibria in a politico-economic model with repeated voting, where agents vote over distortionary income redistribution. The key feature of the theory is that the future constituency of redistributive policies depends positively on the current level of redistribution, since this affects both private investments and the future distribution of voters. Agents vote rationally and fully anticipate the effects of their political choice on both private incentives and future voting outcomes. The model features multiple equilibria. In "pro-welfare" equilibria, both welfare state policies and their effects on distribution persist forever. In "anti-welfare" equilibria, even a majority of beneficiaries of redistributive policies vote strategically so as to induce the formation of a future majority that will vote for zero redistribution.
Abstract:
This paper studies the effects of uncertain lifetime on capital accumulation and growth and also the sensitivity of those effects to the existence of a perfect annuities market. The model is an overlapping generations model with uncertain lifetimes. The technology is convex and such that the marginal product of capital is bounded away from zero. A contribution of this paper is to show that the existence of accidental bequests may lead the economy to an equilibrium that exhibits asymptotic growth, which is impossible in an economy with a perfect annuities market or with certain lifetimes. This paper also shows that if individuals face a positive probability of surviving in every period, they may be willing to save at any age. This effect of uncertain lifetime on savings may also lead the economy to an equilibrium exhibiting asymptotic growth even if there exists a perfect annuities market.
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate individual heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the Canadian NPHS of 1994. Our strategy for the estimation of heterogeneous responses is based on the quantile regression model. The results suggest that there is an important degree of heterogeneity in the association of health with explanatory variables which, in turn, accounts for a substantial percentage of inequality in observed health. A particularly interesting finding is that the marginal response of health to income is zero for healthy individuals but positive and significant for unhealthy individuals. The heterogeneity in the income response reduces both overall health inequality and income-related health inequality.
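As an illustration of the estimation strategy the abstract names, the sketch below fits quantile regressions of a health score on log income at several quantiles, so the income coefficient may differ between lower (unhealthier) and upper (healthier) quantiles; the simulated data and variable names are placeholders, not NPHS variables.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10.0, sigma=0.5, size=1000)
# Placeholder outcome in which income matters more in the lower tail.
health = 50 + 2 * np.log(income) * rng.uniform(0, 1, 1000) + rng.normal(0, 5, 1000)

X = sm.add_constant(np.log(income))
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(health, X).fit(q=q)
    print(f"q={q}: income coefficient = {res.params[1]:.3f}")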