68 results for Proportional counters.
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomials-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. Thus we propose to use a weighted linear regression approach, where all k-order polynomials are used as predictor variables and weights are proportional to the reference density. Finally, for the case of 2-order Hermite polynomials (normal reference) and 1-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
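As an illustration of the weighted-regression route for the normal-reference case, the following sketch estimates Hermite-basis coordinates of the log-ratio of a density to the standard normal reference by weighted least squares, with weights proportional to the reference density. The grid, the kernel density estimate of the target density, and the function name are illustrative assumptions, not the authors' implementation.

```python
# Sketch: coordinates of a density with respect to a Hermite-polynomial
# basis via weighted least squares, weights proportional to the N(0,1)
# reference density. The KDE step and all names are illustrative.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import gaussian_kde, norm

def hermite_coordinates(sample, k=4, grid_size=400):
    x = np.linspace(-4.0, 4.0, grid_size)    # evaluation grid
    f = gaussian_kde(sample)(x)              # estimated density of the data
    g = norm.pdf(x)                          # standard normal reference
    y = np.log(f / g)                        # log-ratio to be expanded
    # Probabilists' Hermite polynomials He_0..He_k, normalized so that
    # E[He_j(Z)^2] = 1 for Z ~ N(0,1) (divide by sqrt(j!)).
    X = np.column_stack([hermeval(x, [0.0] * j + [1.0]) / np.sqrt(factorial(j))
                         for j in range(k + 1)])
    # Weighted least squares with weights w = g: minimize ||sqrt(w)(y - Xb)||
    sw = np.sqrt(g)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta

coords = hermite_coordinates(np.random.standard_normal(500))
print(coords)
```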
Abstract:
CO2 emissions induced by human activities are the major cause of climate change; hence, strong environmental policy that limits the growing dependence on fossil fuels is indispensable. Tradable permits and environmental taxes are the usual tools used in CO2 reduction strategies. Such economic tools provide incentives, through market signals, for polluting industries to reduce their emissions. The aim of this work is to investigate the direct and indirect effects of an environmental tax on Spanish products and services. We apply an environmentally extended input-output (EIO) model to identify the CO2 emission intensities of products and services and, accordingly, we estimate a tax proportional to these intensities. The short-term price effects are analyzed using an input-output price model. The effect of the tax introduction on consumption prices and its influence on consumers' welfare are determined. We also quantify the environmental impacts of such taxation in terms of the reduction in CO2 emissions. The results, based on the Spanish economy for the year 2007, show that sectors with a relatively poor environmental profile are subject to high environmental tax rates. Consequently, applying a CO2 tax to these sectors increases production prices, induces a slight increase in the consumer price index and causes a decrease in private welfare. The revenue from the tax could be used to counterbalance the negative effects on social welfare and also to stimulate an increase in renewable energy shares in the most impacting sectors. Finally, our analysis highlights that the environmental and economic goals cannot be met at the same time with environmental taxation alone, which shows the necessity of finding other (complementary or alternative) measures to ensure both economic and ecological efficiency. Keywords: CO2 emissions; environmental tax; input-output model; effects of environmental taxation.
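A minimal numerical sketch of the mechanism described above: total CO2 intensities from an environmentally extended input-output model, a tax proportional to them, and short-term price effects via the Leontief price model. The 3-sector matrices and the tax rate are invented for illustration; they are not the paper's Spanish 2007 data.

```python
# Sketch of an input-output price model with a CO2 tax added to value
# added: p' = (v + t)'(I - A)^{-1}. All data below are toy values.
import numpy as np

A = np.array([[0.2, 0.1, 0.0],    # technical coefficients (toy)
              [0.1, 0.3, 0.2],
              [0.0, 0.1, 0.1]])
v = np.array([0.5, 0.4, 0.6])     # value-added coefficients per unit output
e = np.array([0.8, 0.2, 0.1])     # direct CO2 per unit output (toy)

L = np.linalg.inv(np.eye(3) - A)  # Leontief inverse
intensity = e @ L                 # total (direct + indirect) CO2 intensities
tax_rate = 0.05                   # tax per unit of embodied CO2 (assumed)
t = tax_rate * intensity          # sector tax proportional to intensity

p0 = v @ L                        # baseline prices
p1 = (v + t) @ L                  # prices after the tax
print((p1 - p0) / p0)             # relative short-term price effects
```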
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. The quadratic algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that this highest scoring gene can be stored and updated, which requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
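A sketch of the linear-time assembly idea: exons are scanned simultaneously by increasing acceptor and by increasing donor position, and a single running maximum replaces the quadratic search over all compatible preceding exons. Frame compatibility, strands and the Gene Model are omitted for brevity; the tuple layout and the minimum intron length are illustrative, not the paper's implementation.

```python
# Sketch of linear-time gene assembly: keep the best gene score among
# exons whose donor lies sufficiently upstream of the current acceptor.
# Each exon is visited a constant number of times overall.
from dataclasses import dataclass

@dataclass
class Exon:
    acceptor: int   # start position
    donor: int      # end position
    score: float

def best_gene_score(exons, min_intron=40):
    by_acceptor = sorted(exons, key=lambda e: e.acceptor)
    by_donor = sorted(exons, key=lambda e: e.donor)
    best = {}            # id(exon) -> best gene score ending in that exon
    best_closed = 0.0    # best score among genes already fully upstream
    j = 0
    for e in by_acceptor:
        # Advance over exons whose donor ends before this acceptor minus
        # a minimum intron length; their best scores become available.
        while j < len(by_donor) and by_donor[j].donor + min_intron <= e.acceptor:
            best_closed = max(best_closed, best[id(by_donor[j])])
            j += 1
        best[id(e)] = best_closed + e.score   # append e to the best prefix
    return max(best.values(), default=0.0)

exons = [Exon(10, 90, 3.0), Exon(200, 260, 1.5), Exon(400, 520, 2.2)]
print(best_gene_score(exons))   # 6.7 with the toy data above
```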
Abstract:
This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed in and shown to be able to attain the full transmission diversity. We study the iterative threshold performance of those codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold of Root-LDPC codes is proportional to (α1 α2)^(-1), where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.
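A brief sketch of why a threshold proportional to (α1 α2)^(-1) yields full diversity, assuming the usual outage formulation with independent exponentially distributed block power gains (the constant c and this formulation are illustrative, not taken from the paper):

```latex
% Decoding is assumed to fail when the received SNR is below the
% iterative threshold, i.e. when SNR < c/(\alpha_1\alpha_2):
\[
  P_{\mathrm{out}} \;=\; \Pr\!\left[\alpha_1\alpha_2 < \tfrac{c}{\mathrm{SNR}}\right].
\]
% For independent exponentially distributed power gains, as t -> 0,
\[
  \Pr[\alpha_1\alpha_2 < t] \;\sim\; t\,\log(1/t),
\]
% so P_out decays as SNR^{-2} up to a logarithmic factor, i.e. the
% code attains diversity order 2, the full diversity on 2 fading blocks.
```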
Abstract:
We argue that the long-term sustainability of social security systems requires not only a better balance between the proportions of the population in retirement and in employment, but also an equitable distribution of the additional financial burden that aging will inevitably impose. We examine how a proportional fixed-ratios model of burden sharing between the aged and non-aged can establish inter-generational equity. Additionally, we address the question of intra-generational equity and argue that the positive association between lifetime income and longevity requires more progressive financing of pensions and of care for the elderly.
Abstract:
We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
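A compact sketch of the combined construction: take logs of the table, double-centre with weights, and apply a weighted SVD, with row and column weights proportional to the margins. The toy table and the particular centring shown are a plausible reading of such a weighted logratio decomposition, not necessarily the authors' exact definition.

```python
# Sketch: SVD of a log-transformed table with row/column weights
# proportional to the table margins (illustrative construction).
import numpy as np

N = np.array([[25., 10.,  5.],
              [10., 40., 15.],
              [ 5., 15., 30.]])         # toy contingency table
P = N / N.sum()
r = P.sum(axis=1)                       # row masses (margins)
c = P.sum(axis=0)                       # column masses
L = np.log(P)
# Weighted double-centring of log P removes weighted row/column means
L = L - (L @ c)[:, None] - (r @ L)[None, :] + r @ L @ c
S = np.sqrt(r)[:, None] * L * np.sqrt(c)[None, :]   # weighted matrix
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
rows = U / np.sqrt(r)[:, None] * sv     # principal row coordinates
cols = Vt.T / np.sqrt(c)[:, None]       # standard column coordinates
print(rows[:, :2])                      # first two biplot dimensions
```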
Abstract:
This article presents, discusses and tests the hypothesis that it is the number of parties that explains the choice of electoral systems, rather than the other way round. Already existing political parties tend to choose electoral systems that, rather than generating new party systems by themselves, crystallize, consolidate or reinforce previously existing party configurations. A general model develops the argument and presents the concept of 'behavioral-institutional equilibrium' to account for the relation between electoral systems and party systems. The most comprehensive dataset and test of these notions to date, encompassing 219 elections in 87 countries since the 19th century, are presented. The analysis gives strong support to the hypotheses that political party configurations dominated by a few parties tend to establish majority-rule electoral systems, while multiparty systems already existed before the introduction of proportional representation. It also offers the new theoretical proposition that strategic party choice of electoral systems leads to a general trend toward proportional representation over time.
Abstract:
The origins of electoral systems have received scant attention in the literature. Looking at the history of electoral rules in the advanced world over the last century, this paper shows that the existing wide variation in electoral rules across nations can be traced to the strategic decisions that the current ruling parties, anticipating the coordinating consequences of different electoral regimes, make to maximize their representation, according to the following conditions. On the one hand, as long as the electoral arena does not change substantially and the current electoral regime serves the ruling parties well, the latter have no incentive to modify the electoral regime. On the other hand, as soon as the electoral arena changes (due to the entry of new voters or a change in their preferences), the ruling parties will entertain changing the electoral system, depending on two main conditions: the emergence of new parties and the coordinating capacities of the old ruling parties. Accordingly, if the new parties are strong, the old parties shift from plurality/majority rules to proportional representation (PR) only if they are locked into a 'non-Duvergerian' equilibrium, i.e. if no old party enjoys a dominant position (the case of most small European states); conversely, they do not if a Duvergerian equilibrium exists (the case of Great Britain). Similarly, whenever the new entrants are weak, a non-PR system is maintained, regardless of the structure of the old party system (the case of the USA). The paper also discusses the role of trade and of ethnic and religious heterogeneity in the adoption of PR rules.
Abstract:
We propose a simple adaptive procedure for playing a game. In this procedure, players depart from their current play with probabilities that are proportional to measures of regret for not having used other strategies (these measures are updated every period). It is shown that our adaptive procedure guarantees that, with probability one, the sample distributions of play converge to the set of correlated equilibria of the game. To compute these regret measures, a player needs to know his payoff function and the history of play. We also offer a variation where every player knows only his own realized payoff history (but not his payoff function).
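A compact sketch of this regret-matching procedure for a two-player game: each player switches away from its current action with probabilities proportional to the positive conditional regrets, keeping the remaining probability mass on the current action. The game (matching pennies), the normalizer mu and the horizon are illustrative choices, not part of the paper.

```python
# Sketch: regret matching in a 2x2 game. cum[p][j, k] accumulates the
# regret of player p for having played j instead of k.
import numpy as np

rng = np.random.default_rng(0)
U = [np.array([[1., -1.], [-1., 1.]]),     # row player's payoffs
     np.array([[-1., 1.], [1., -1.]])]     # column player's payoffs

n, T = 2, 50_000
cum = [np.zeros((n, n)), np.zeros((n, n))]
act = [0, 0]
joint = np.zeros((n, n))

mu = 4.0                                   # normalizer > max regret step
for t in range(1, T + 1):
    for p in range(2):
        j = act[p]
        probs = np.maximum(cum[p][j], 0.0) / (mu * t)
        probs[j] = 0.0
        probs[j] = 1.0 - probs.sum()       # inertia: stay with the rest
        act[p] = rng.choice(n, p=probs)
    a, b = act
    joint[a, b] += 1
    for k in range(n):                     # update conditional regrets
        cum[0][a, k] += U[0][k, b] - U[0][a, b]
        cum[1][b, k] += U[1][a, k] - U[1][a, b]

print(joint / T)   # empirical joint play, approaches a correlated eq.
```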
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariate-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Error (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing the sample size when precision is given, or 2) improving precision for a given sample size.
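A small sketch of the mixed allocation rule as described: a fraction of the total sample is allocated proportionally to area sizes and the rest evenly across areas. The mixing fraction and population sizes are illustrative, and simple rounding may make the allocations sum to slightly more or less than n.

```python
# Sketch: mixed sample allocation, part proportional to area sizes and
# part even across areas (lam and the sizes are illustrative).
import numpy as np

def mixed_allocation(n, area_sizes, lam=0.5):
    sizes = np.asarray(area_sizes, dtype=float)
    proportional = lam * n * sizes / sizes.sum()   # proportional share
    even = (1.0 - lam) * n / len(sizes)            # even share
    return np.rint(proportional + even).astype(int)

print(mixed_allocation(1000, [5000, 2000, 500, 100]))
```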
Abstract:
The old, understudied electoral system composed of multi-member districts, open ballot and plurality rule is presented as the most remote scene of the origin of both political parties and new electoral systems. A survey of the uses of this set of electoral rules in different parts of the world during remote and recent periods shows how widespread it has been. A model of voting under this electoral system demonstrates that, while it can produce varied and pluralistic representation, it also provides incentives to form factional or partisan candidacies. Famous negative reactions to the emergence of factions and political parties during the 18th and 19th centuries are reinterpreted in this context. Many electoral rules and procedures invented since the second half of the 19th century, including the Australian ballot, single-member districts, limited and cumulative ballots, and proportional representation rules, derived from the search for ways to prevent the original multi-member district system from producing a single-party sweep. The general relations between political parties and electoral systems are restated to account for the foundational stage discussed here.
Abstract:
In this article we present a new reinsurance strategy, which we call the threshold reinsurance strategy, that acts differently depending on the level of the reserves. For reserve levels below a given threshold, the manager applies proportional reinsurance; for higher levels, considering that the portfolio has reached a certain solvency, the manager chooses not to cede any percentage of the risk. Analyzing the effect of introducing threshold reinsurance on the survival probability, and comparing it with proportional reinsurance and with the option of not reinsuring, allows us to find reinsurance strategies that are equivalent from the solvency point of view. Keywords: risk theory, threshold reinsurance, proportional reinsurance, survival probability.
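A minimal Monte Carlo sketch of the threshold rule in a compound-Poisson risk model: below the barrier a fraction (1 - retention) of every claim and premium is ceded; above it, nothing. All parameters, the exponential claim sizes, the equal loading for insurer and reinsurer, and the approximation of holding the regime fixed between claim epochs are illustrative assumptions, not the paper's setting.

```python
# Sketch: survival probability under threshold reinsurance, estimated by
# simulating surplus paths of a compound-Poisson risk process.
import numpy as np

rng = np.random.default_rng(1)

def survival_prob(u0=10.0, barrier=20.0, retention=0.6, lam=1.0,
                  mean_claim=1.0, loading=0.2, horizon=200.0, n_paths=5000):
    c_full = (1.0 + loading) * lam * mean_claim   # gross premium rate
    survived = 0
    for _ in range(n_paths):
        u, t, alive = u0, 0.0, True
        while t < horizon:
            w = rng.exponential(1.0 / lam)        # time to next claim
            t += w
            keep = retention if u < barrier else 1.0  # threshold rule
            u += c_full * keep * w                # premium net of cession
            u -= keep * rng.exponential(mean_claim)   # retained claim share
            if u < 0.0:
                alive = False                     # ruin before horizon
                break
        survived += alive
    return survived / n_paths

print(survival_prob())        # compare with survival_prob(retention=1.0)
```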
Abstract:
Using the extended Thomas-Fermi version of density-functional theory (DFT), calculations are presented for the barrier for the reaction Na20+ + Na20+ → Na40^2+. The deviation from the simple Coulomb barrier is shown to be proportional to the electron density at the bond midpoint of the supermolecule (Na20+)2. An extension of conventional quantum-chemical studies of homonuclear diatomic molecular ions is then effected to apply to the supermolecular ions of the alkali metals. This then allows the Na results to be utilized to make semiquantitative predictions of the position and height of the maximum of the fusion barrier for other alkali clusters. These predictions are confirmed by means of similar DFT calculations for the K clusters.
Abstract:
The part proportional to the Euler-Poincaré characteristic of the contribution of spin-2 fields to the gravitational trace anomaly is computed. It is seen to be of the same sign as all the lower-spin contributions, making anomaly cancellation impossible. Subtleties related to Weyl invariance, gauge independence, ghosts, and counting of degrees of freedom are pointed out.
Abstract:
During plastic deformation of crystalline materials, the collective dynamics of interacting dislocations gives rise to various patterning phenomena. A crucial and still open question is whether the long-range dislocation-dislocation interactions, which do not have an intrinsic range, can lead to spatial patterns that exhibit well-defined characteristic scales. It is demonstrated for a general model of two-dimensional dislocation systems that spontaneously emerging dislocation pair correlations introduce a length scale which is proportional to the mean dislocation spacing. General properties of the pair correlation functions are derived, and explicit calculations are performed for a simple special case, viz. pair correlations in single-glide dislocation dynamics. It is shown that in this case the dislocation system exhibits a patterning instability leading to the formation of walls normal to the glide plane. The results are discussed in terms of their general implications for dislocation patterning.
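A one-line version of the scaling statement, assuming a two-dimensional system of total dislocation density ρ in which the mean spacing is the only available length (a reading of the abstract, not the paper's derivation):

```latex
% The emergent correlation length scales with the mean dislocation
% spacing; in two dimensions, for total dislocation density rho,
\[
  \ell_{\mathrm{corr}} \;\propto\; \bar d \;=\; \rho^{-1/2},
\]
% so pattern wavelengths (e.g. wall spacings) scale as 1/\sqrt{\rho}.
```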