51 results for Welfare Schemes
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In this paper we study the macroeconomic effects of an inflow of low-skilled workers into an economy with capital accumulation and two types of agents. We find that there are substantial dynamic effects following unexpected migrations, with adjustments that resemble those triggered by a sudden disruption of the capital stock. We look at the interrelations between these dynamic effects and three different fiscal systems for the redistribution of income, and find that these schemes can change the dynamics and lead to prolonged periods of adjustment. The aggregate welfare implications are sensitive to the welfare system: while there are welfare gains without redistribution, these gains may be turned into costs when the state engages in redistribution.
Abstract:
This paper analyzes the strategic decision to integrate by firms that produce complementary products. Integration entails bundled pricing. We find that integration is privately profitable for a high enough degree of product differentiation, that the profits of the non-integrated firms decrease, and that consumer surplus need not increase when firms integrate, despite the fact that prices fall. Thus, integration of a system is welfare-improving for a high enough degree of product differentiation combined with a minimum demand advantage relative to the competing system. Overall, and on the basis of a number of extensions undertaken, we conclude that bundling need not be anti-competitive and that integration should be permitted only under some circumstances.
Abstract:
Some analysts use sequential dominance criteria, and others use equivalence scales in combination with non-sequential dominance tests, to make welfare comparisons of joint distributions of income and needs. In this paper we present a new sequential procedure which copes with situations in which sequential dominance fails. We also demonstrate that the recommendations deriving from the sequential approach are valid for distributions of equivalent income whatever equivalence scale the analyst might adopt. Thus the paper marries together the sequential and equivalizing approaches, seen as alternatives in much previous literature. All results are specified in forms which allow for demographic differences in the populations being compared.
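For orientation, the sequential dominance criterion referred to here (in the Atkinson-Bourguignon tradition) can be stated, in its simplest first-order form and assuming the two populations have the same needs composition, as follows: ordering needs groups from neediest (i = 1) to least needy (i = K), distribution F sequentially dominates G if, for every j = 1, ..., K and every income level y,

    \sum_{i=1}^{j} p_i\, F_i(y) \;\le\; \sum_{i=1}^{j} p_i\, G_i(y),

where F_i and G_i are the income distributions conditional on needs group i and p_i are the group population shares. This is a textbook statement offered only for context, not the paper's new procedure, which is designed precisely for cases where such conditions fail and for populations with differing demographics.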
Abstract:
We investigate different models that are intended to describe the small mean free path regime of a kinetic equation, with particular attention paid to the moment closure by entropy minimization. We introduce a specific asymptotic-induced numerical strategy which is able to treat the stiff terms of the asymptotic diffusive regime. We evaluate numerically the performance of the method and the ability of the reduced models to capture the main features of the full kinetic equation.
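As a point of reference (a textbook scaling, not the paper's specific model), the small mean free path regime is usually introduced through a scaled kinetic equation whose solution relaxes to a diffusion equation as the mean free path \varepsilon tends to zero:

    \partial_t f + \frac{1}{\varepsilon}\, v\cdot\nabla_x f = \frac{1}{\varepsilon^{2}}\, Q(f), \qquad f \longrightarrow \rho(t,x)\,M(v), \qquad \partial_t \rho - \nabla_x\cdot\big(D\,\nabla_x\rho\big) = 0,

where M is the equilibrium distribution and D a diffusion matrix determined by the collision operator Q; the stiff 1/\varepsilon^{2} term is what an asymptotic-induced scheme must handle robustly.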
Abstract:
We analyze how a contest organizer optimally chooses the winner when the contestants' efforts have already been exerted and commitment to the use of a given contest success function is not possible. We define the notion of rationalizability in mixed strategies to capture such a situation. Our approach allows us to derive different contest success functions depending on the aims and attitudes of the decider. We derive contest success functions which are closely related to commonly used functions, providing new support for them. By taking into account social welfare considerations, our approach bridges the contest literature and the recent literature on political economy. Keywords: Endogenous Contests, Contest Success Function, Mixed Strategies. JEL Classification: C72 (Noncooperative Games), D72 (Economic Models of Political Processes: Rent-Seeking, Elections), D74 (Conflict; Conflict Resolution; Alliances)
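For context, the commonly used contest success functions alluded to here include the Tullock ratio form and the logit form (standard expressions, not taken from the paper):

    p_i(x_1,\dots,x_n) = \frac{x_i^{\,r}}{\sum_{j=1}^{n} x_j^{\,r}}, \qquad\qquad p_i(x_1,\dots,x_n) = \frac{e^{\lambda x_i}}{\sum_{j=1}^{n} e^{\lambda x_j}},

where p_i is the probability that contestant i wins given the effort profile (x_1, ..., x_n) and r, \lambda > 0 are sensitivity parameters.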
Abstract:
This paper estimates the effect of piracy attacks on shipping costs using a unique data set on shipping contracts in the dry bulk market. We look at shipping routes whose shortest path exposes them to piracy attacks and find that the increase in attacks in 2008 led to around a ten percent increase in shipping costs. We use this estimate to get a sense of the welfare loss imposed by piracy. Our intermediate estimate suggests that the creation of $120 million of revenue for pirates in the Somalia area led to a welfare loss of over $1.5 billion.
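Taken at face value, the two figures quoted in the abstract imply a welfare loss of roughly twelve to thirteen dollars for every dollar of pirate revenue:

    \frac{\$1.5\ \text{billion}}{\$120\ \text{million}} \approx 12.5.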
Abstract:
In the present paper we discuss and compare two different energy decomposition schemes: Mayer's Hartree-Fock energy decomposition into diatomic and monoatomic contributions [Chem. Phys. Lett. 382, 265 (2003)], and the Ziegler-Rauk dissociation energy decomposition [Inorg. Chem. 18, 1558 (1979)]. The Ziegler-Rauk scheme is based on a separation of a molecule into fragments, while Mayer's scheme can be used in cases where a fragmentation of the system into clearly separable parts is not possible. In the Mayer scheme, the density of a free atom is deformed to give the one-atom Mulliken density that subsequently interacts to give rise to the diatomic interaction energy. We give a detailed analysis of the diatomic energy contributions in the Mayer scheme and a close look at the one-atom Mulliken densities. The Mulliken density ρA has a single large maximum around the nuclear position of atom A, but exhibits slightly negative values in the vicinity of neighboring atoms. The main connecting point between the two analysis schemes is the electrostatic energy. Both decomposition schemes utilize the same electrostatic energy expression, but differ in how the fragment densities are defined. In the Mayer scheme, the electrostatic component originates from the interaction of the Mulliken densities, while in the Ziegler-Rauk scheme the undisturbed fragment densities interact. The values of the electrostatic energy resulting from the two schemes differ significantly but typically have the same order of magnitude. Both methods are useful and complementary, since Mayer's decomposition focuses on the energy of the finally formed molecule, whereas the Ziegler-Rauk scheme describes the bond formation starting from undeformed fragment densities.
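For concreteness, the shared electrostatic expression is the familiar fragment-fragment Coulomb energy sketched below (a generic form, written here for orientation only); the two schemes differ in which densities \rho_A, \rho_B are inserted:

    E_{\text{elst}}(A,B) = \sum_{\alpha\in A}\sum_{\beta\in B}\frac{Z_\alpha Z_\beta}{R_{\alpha\beta}} - \sum_{\alpha\in A}\int \frac{Z_\alpha\,\rho_B(\mathbf r)}{|\mathbf r-\mathbf R_\alpha|}\,d\mathbf r - \sum_{\beta\in B}\int \frac{Z_\beta\,\rho_A(\mathbf r)}{|\mathbf r-\mathbf R_\beta|}\,d\mathbf r + \iint \frac{\rho_A(\mathbf r_1)\,\rho_B(\mathbf r_2)}{|\mathbf r_1-\mathbf r_2|}\,d\mathbf r_1\,d\mathbf r_2,

with one-atom Mulliken densities inserted in Mayer's scheme and undisturbed fragment densities in the Ziegler-Rauk scheme.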
Abstract:
Study carried out during a stay at the University of British Columbia, Canada, between 2010 and 2012. First, a scale was developed to measure lameness (with values from 1 to 5). This scale was then used to study the association between farm-level risk factors (facility design and management) and the prevalence of lameness in North America. Data were collected on a total of 40 farms in the Northeastern United States (NE) and 39 in California (CA). All cows in the highest-producing group were categorized according to lameness severity: healthy, lame, and severely lame. The overall prevalence of lameness was 55% in NE and 31% in CA. The prevalence of severe lameness was 8% in NE and 4% in CA. In NE, overall lameness increased with the presence of sawdust in the stalls and decreased on larger farms, with greater amounts of bedding and access to pasture. Severe lameness increased with poor stall hygiene and the presence of sawdust in the stalls, and decreased with the amount of bedding provided, the use of sand bedding, and farm size. In CA, overall lameness increased with poor stall hygiene, and decreased with farm size, the presence of rubber flooring, larger stall space, more space at the water trough, and hoof disinfection. Severe lameness increased with poor stall hygiene and decreased with the frequency of pen cleaning. In conclusion, changes in management and facility design can help reduce the prevalence of lameness, although the strategies to follow will vary by region.
Abstract:
The space and time discretizations inherent to all FDTD schemes introduce non-physical dispersion errors, i.e. deviations of the speed of sound from the theoretical value predicted by the governing Euler differential equations. A general methodology for computing this dispersion error via straightforward numerical simulations of the FDTD schemes is presented. The method is shown to provide remarkable accuracies of the order of 1/1000 in a wide variety of two-dimensional finite difference schemes.
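For orientation, the dispersion error in question is the deviation of the numerical phase speed from c. For the standard second-order staggered scheme in two dimensions, the textbook numerical dispersion relation (quoted here only for context, not the paper's general methodology) reads

    \left[\frac{1}{c\,\Delta t}\sin\!\Big(\frac{\omega\,\Delta t}{2}\Big)\right]^{2} = \left[\frac{1}{\Delta x}\sin\!\Big(\frac{k_x\,\Delta x}{2}\Big)\right]^{2} + \left[\frac{1}{\Delta y}\sin\!\Big(\frac{k_y\,\Delta y}{2}\Big)\right]^{2},

so the numerical phase speed \omega/|\mathbf k| generally differs from c and approaches it only as \Delta x, \Delta y, \Delta t \to 0.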
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of the representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced that provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
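As a minimal illustration of the multiplicative property at stake (a sketch only, not one of the constructions studied in the paper): in Shamir's scheme over a prime field, with threshold-t polynomials and n >= 2t + 1 players, the coordinate-wise products of the shares form a sharing of the product of the secrets, so the product can be recovered by Lagrange interpolation at zero. All names and parameters below are illustrative.

    import random

    P = 2**61 - 1  # a prime modulus (assumed field size for this sketch)

    def share(secret, t, n):
        """Shamir sharing: a random degree-t polynomial with constant term `secret`,
        evaluated at x = 1..n."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(points):
        """Lagrange interpolation at x = 0 over the prime field."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    t, n = 2, 5                        # threshold t, with n >= 2t + 1 players
    a, b = 1234, 5678                  # the two secrets
    sa, sb = share(a, t, n), share(b, t, n)
    prod_shares = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
    assert reconstruct(prod_shares) == a * b % P   # 2t + 1 share products suffice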
Abstract:
Children occupy centre-stage in any new welfare equilibrium. Failure to support families may produce either of two undesirable scenarios. We shall see a society without children if motherhood remains incompatible with work. A new family policy needs to recognize that children are a collective asset and that the cost of having children is rising. The double challenge is to eliminate the constraints on having children in the first place, and to ensure that the children we have are given optimal opportunities. The simple reason why a new social contract is called for is that fertility and child quality combine both private utility and societal gains. And like no other epoch in the past, the societal gains are mounting all the while that families' ability to produce these social gains is weakening. In the following I analyze the twin challenges of fertility and child development. I then examine what kind of policy mix will ensure both the socially desired level of fertility and investment in our children. The task is to identify a Paretian optimum that maximizes efficiency gains and social equity simultaneously.
Abstract:
The efficacy of social care, publicly and universally provided, has been contested from two different points of view. First, advocates of targeting social policy have criticized the Matthew effect of universal provision; second, theories arguing in favour of heterogeneous rationalities between men and women, and even different preferences among women, predict that universal provision of services limits women's choices more than home allowances do. The author tests both hypotheses and concludes that, at least in the case of adult care, women's choices are significantly affected by women's social positions and by the availability of public services. Furthermore, targeting through means-tested eligibility criteria has no significant effect on inequality but, confirming the redistributive paradox, reduces women's options.
Abstract:
In this paper we present a simple theory-based measure of the variations in aggregate economic efficiency: the gap between the marginal product of labor and the household's consumption/leisure tradeoff. We show that this indicator corresponds to the inverse of the markup of price over social marginal cost, and give some evidence in support of this interpretation. We then show that, with some auxiliary assumptions, our gap variable may be used to measure the efficiency costs of business fluctuations. We find that the latter costs are modest on average. However, to the extent that the flexible price equilibrium is distorted, the gross efficiency losses from recessions and gains from booms may be large. Indeed, we find that the major recessions involved large efficiency losses. These results hold for reasonable parameterizations of the Frisch elasticity of labor supply, the coefficient of relative risk aversion, and steady state distortions.
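As a hedged illustration of the gap concept (a standard formulation in this literature, not necessarily the paper's exact specification): with period utility \log C_t - N_t^{1+\varphi}/(1+\varphi), technology Y_t = A_t N_t^{1-\alpha}, and a competitive labor market, the household's consumption/leisure tradeoff is MRS_t = C_t N_t^{\varphi}, the marginal product of labor is MPN_t = (1-\alpha) Y_t / N_t, and

    \text{Gap}_t \;\equiv\; \frac{MRS_t}{MPN_t} \;=\; \frac{C_t N_t^{\varphi}}{(1-\alpha)\,Y_t/N_t} \;=\; \frac{1}{\mu_t},

the inverse of the markup \mu_t of price over social marginal cost.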
Abstract:
Two main school choice mechanisms have attracted attention in the literature: Boston and deferred acceptance (DA). The question arises of the ex-ante welfare implications when the game is played by participants that vary in terms of their strategic sophistication. Abdulkadiroglu, Che and Yasuda (2011) have shown that the chances of naive participants getting into a good school are higher under the Boston mechanism than under DA, and that some naive participants are actually better off. In this note we show that these results can be extended to show that, under the veil of ignorance, i.e. students not yet knowing their utility values, all naive students may prefer to adopt the Boston mechanism.
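For readers unfamiliar with DA, the sketch below is a minimal Python implementation of student-proposing deferred acceptance with school capacities; it is a textbook version offered for orientation, not code from the paper, and the names and preference data are purely illustrative.

    def deferred_acceptance(student_prefs, school_prefs, capacity):
        """Student-proposing deferred acceptance (Gale-Shapley).
        student_prefs: dict student -> ordered list of schools
        school_prefs:  dict school  -> ordered list of students (priority order)
        capacity:      dict school  -> number of seats
        Returns a dict student -> assigned school (or None)."""
        rank = {s: {st: i for i, st in enumerate(prefs)} for s, prefs in school_prefs.items()}
        next_choice = {st: 0 for st in student_prefs}        # next school to propose to
        held = {s: [] for s in school_prefs}                 # tentatively held students
        unassigned = set(student_prefs)
        while unassigned:
            st = unassigned.pop()
            if next_choice[st] >= len(student_prefs[st]):
                continue                                      # list exhausted: stays unmatched
            school = student_prefs[st][next_choice[st]]
            next_choice[st] += 1
            held[school].append(st)
            held[school].sort(key=lambda x: rank[school][x])  # keep best-ranked students
            if len(held[school]) > capacity[school]:
                rejected = held[school].pop()                 # worst-ranked student is rejected
                unassigned.add(rejected)
        match = {st: None for st in student_prefs}
        for school, students in held.items():
            for st in students:
                match[st] = school
        return match

    # Illustrative example (hypothetical preferences and priorities):
    students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
    schools  = {"A": ["s1", "s2", "s3"], "B": ["s2", "s3", "s1"]}
    print(deferred_acceptance(students, schools, {"A": 1, "B": 2}))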