116 results for sum
Abstract:
The recent strides of democracy in Latin America have been associated with conflicting outcomes. The expectation that democracy would bring about peace and prosperity has been only partly satisfied. While political violence has been by and large eradicated from the subcontinent, poverty and social injustice still prevail. Our study argues that democracy matters for inequality through the growing strength of center-left and left parties and by making political leaders in general more responsive to the underprivileged. Furthermore, although the pension reforms recently enacted in the region had overall regressive effects on income distribution, democratic countries still benefit from their political past: where the democratic tradition was stronger, such outcomes have been milder. Democratic tradition and the specific ideological orientation of the parties in power, on the other hand, did not play an equally crucial role in securing lower levels of political violence: during the last wave of democratizations in Latin America, domestic peace was rather an outcome of political and social concessions to those in distress. In sum, alongside other factors, especially economic ones, recent democratizations have delivered domestic peace in most cases but have so far been unable to solve the problem of poverty and inequality because democratic traditions in the subcontinent have been relatively weak; more specifically, this weakness has undermined the growth of left and progressive parties, acting as an obstacle to redistribution. That weakness, on the other hand, has not prevented the drastic reduction of domestic political violence, since what mattered in this case was a combination of symbolic or material concessions and political agreements among powerful élites and counter-élites.
Abstract:
We prove a formula for the multiplicities of the index of an equivariant transversally elliptic operator on a G-manifold. The formula is a sum of integrals over blowups of the strata of the group action and also involves eta invariants of associated elliptic operators. Among the applications, we obtain an index formula for basic Dirac operators on Riemannian foliations, a problem that was open for many years.
Abstract:
This paper is concerned with the investigation of the intergenerational mobility of education in several European countries and its changes across birth cohorts (1940-1980) using a new mobility index that considers the total degree of mobility as the weighted sum of mobility with respect to both parents. Moreover, this mobility index enables the analysis of the role of family characteristics as mediating factors in the statistical association between individual and parental education. We find that Nordic countries display lower levels of educational persistence but that the degree of mobility increases over time only in those countries with low initial levels. Moreover, the results suggest that the degree of mobility with respect to fathers and mothers converges to the same level and that family characteristics account for an important part of the statistical association between parental education and children’s schooling; a particular finding is that the most important elements of family characteristics are the family’s socio-economic status and educational assortative mating of the parents.
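For orientation only, the structure of the index described above can be written schematically as a weighted sum of mobility with respect to each parent; the symbols and weights below are illustrative assumptions, not the paper's exact definition:

    % Schematic form of the mobility index (placeholders, not the paper's formula):
    M_{\text{total}} = w_f\, M_{\text{father}} + w_m\, M_{\text{mother}},
    \qquad w_f + w_m = 1,\quad w_f, w_m \ge 0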
Abstract:
The aim of this paper is to discover the origins of utility regulation in Spain and to analyse, from a microeconomic perspective, its characteristics and the impact of regulation on consumers and utilities. Madrid and the Madrilenian utilities are taken as a case study. The electric industry in the period studied was a natural monopoly. Each of the three phases of production, generation, transmission and distribution, had natural monopoly characteristics. Therefore, the most efficient way to generate, transmit and distribute electricity was a monopoly, because a single firm can produce a given quantity at a lower cost than the sum of the costs incurred by two or more firms. A problem arises because, when a firm is the single provider, it can charge prices above marginal cost, that is, monopoly prices. When a monopolist reduces the quantity produced, the price increases, causing consumers to demand less than the economically efficient level and incurring a loss of consumer surplus. The loss of consumer surplus is not completely captured by the monopolist, causing a loss of social surplus: a deadweight loss. The main objective of regulation is to minimise the deadweight loss. Regulation is also needed because, when the monopolist sets output where marginal cost equals marginal revenue, there is an incentive for other firms to enter the market, creating inefficiency. The Madrilenian industry has been chosen because of the availability of statistical information on costs and production. The complex industry structure and the atomised demand add interest to the analysis. This study will also shed some light on the tariff regulation of the period, which has been poorly studied, and will complement the literature on US electric utility regulation, where a different type of regulation was implemented.
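As a hedged aside, the two claims above admit a standard textbook formalization (not taken from the article itself): cost subadditivity, which makes single-firm supply cheapest, and the deadweight loss created when output is restricted below the efficient level:

    % Natural monopoly (cost subadditivity): one firm serves total demand
    % q_1 + q_2 more cheaply than any split between two firms.
    C(q_1 + q_2) < C(q_1) + C(q_2)

    % Deadweight loss when output falls from the efficient level Q_c
    % (price equals marginal cost) to the monopoly level Q_m:
    DWL = \int_{Q_m}^{Q_c} \bigl( P(q) - MC(q) \bigr)\, dq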
Abstract:
In this paper we prove a formula for the analytic index of a basic Dirac-type operator on a Riemannian foliation, solving a problem that has been open for many years. We also consider more general indices given by twisting the basic Dirac operator by a representation of the orthogonal group. The formula is a sum of integrals over blowups of the strata of the foliation and also involves eta invariants of associated elliptic operators. As a special case, a Gauss-Bonnet formula for the basic Euler characteristic is obtained using two independent proofs.
Credit risk contributions under the Vasicek one-factor model: a fast wavelet expansion approximation
Abstract:
Measuring the contribution of individual transactions to the total risk of a credit portfolio is a major issue for financial institutions. VaR Contributions (VaRC) and Expected Shortfall Contributions (ESC) have become two popular ways of quantifying these risks. However, the usual Monte Carlo (MC) approach is known to be a very time-consuming method for computing these risk contributions. In this paper we consider the Wavelet Approximation (WA) method for Value at Risk (VaR) computation presented in [Mas10] in order to calculate the Expected Shortfall (ES) and the risk contributions under the Vasicek one-factor model framework. We decompose the VaR and the ES as a sum of sensitivities representing the marginal impact on the total portfolio risk. Moreover, we present technical improvements to the Wavelet Approximation (WA) that considerably reduce the computational effort of the approximation while, at the same time, increasing its accuracy.
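The abstract contrasts the wavelet approximation with plain Monte Carlo. As a point of reference only, below is a minimal Monte Carlo sketch of Expected Shortfall contributions under the Vasicek one-factor model; the portfolio data, correlation and confidence level are invented illustrative values, and this is the slow baseline the paper seeks to avoid, not the WA method of [Mas10]:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Illustrative portfolio: exposures, default probabilities, loss given default.
    ead = np.array([100.0, 250.0, 75.0, 180.0])   # exposure at default
    pd_ = np.array([0.01, 0.03, 0.02, 0.05])      # unconditional default probabilities
    lgd = np.array([0.45, 0.40, 0.60, 0.50])      # loss given default
    rho = 0.15                                    # assumed asset correlation
    alpha = 0.999                                 # confidence level
    n_sim = 200_000

    # Vasicek one-factor model: asset value driven by a common factor Y and an
    # idiosyncratic shock; default when the asset value falls below a threshold.
    y = rng.standard_normal((n_sim, 1))            # systematic factor
    eps = rng.standard_normal((n_sim, len(ead)))   # idiosyncratic shocks
    asset = np.sqrt(rho) * y + np.sqrt(1.0 - rho) * eps
    defaults = asset < norm.ppf(pd_)               # default indicators

    losses_i = defaults * ead * lgd                # per-obligor losses
    losses = losses_i.sum(axis=1)                  # portfolio loss per scenario

    var = np.quantile(losses, alpha)               # Value at Risk
    tail = losses >= var                           # tail scenarios
    es = losses[tail].mean()                       # Expected Shortfall

    # Expected Shortfall contributions: average per-obligor loss in the tail;
    # by construction they sum to the portfolio ES.
    esc = losses_i[tail].mean(axis=0)
    print("VaR:", var, "ES:", es)
    print("ES contributions:", esc, "sum:", esc.sum())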
Abstract:
The main objective of this article is the selection and comparison of two static analysis tools for Java. This task requires first studying the state of the art of these analysers, identifying which features are desirable for this type of analyser, and finally comparing them by running them on the two chosen open-source projects, argoUML and openProj. We compare FindBugs with PMD, two analysers that can be used with JDK version 1.6. The results of the comparison allow us to conclude that the analysers complement each other in terms of the bugs detected, with little overlap. As a conclusion, we can say that bug hunting requires more than one static analysis tool.
Abstract:
MELIBEA is a directory and validator of policies in favour of open access to scientific and academic output. As a directory, it describes the existing institutional policies related to open access (OA) to scientific and academic output. As a validator, it subjects them to a qualitative and quantitative analysis based on compliance with a set of indicators that reflect the foundations of an institutional policy. The validator reports a score and a compliance percentage for each of the policies analysed. This is computed from the values assigned to certain indicators and their weighting according to their relative importance. The sum of the weighted values of the indicators is adjusted to a percentage scale and leads to what we have called the "Validated open access percentage", whose calculation is described in the Methodology section. The types of institutions analysed are universities, research centres, funding agencies and governmental organisations.
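One plausible reading of the weighted-indicator calculation sketched above; the exact formula is given in the Methodology section, so the expression below is an assumption rather than MELIBEA's official definition:

    % Weighted sum of indicator values v_i rescaled to a percentage
    % ("Validated open access percentage", schematic only):
    \%OA_{\text{validated}} = 100 \times \frac{\sum_i w_i\, v_i}{\sum_i w_i\, v_i^{\max}}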
Abstract:
Emergent molecular measurement methods, such as DNA microarrays, qRT-PCR, and many others, offer tremendous promise for the personalized treatment of cancer. These technologies measure the amount of specific proteins, RNA, DNA or other molecular targets from tumor specimens with the goal of "fingerprinting" individual cancers. Tumor specimens are heterogeneous; an individual specimen typically contains unknown amounts of multiple tissue types. Thus, the measured molecular concentrations result from an unknown mixture of tissue types, and must be normalized to account for the composition of the mixture. For example, a breast tumor biopsy may contain normal, dysplastic and cancerous epithelial cells, as well as stromal components (fatty and connective tissue) and blood and lymphatic vessels. Our diagnostic interest focuses solely on the dysplastic and cancerous epithelial cells. The remaining tissue components serve to "contaminate" the signal of interest. The proportion of each of the tissue components changes as a function of patient characteristics (e.g., age), and varies spatially across the tumor region. Because each of the tissue components produces a different molecular signature, and the amount of each tissue type is specimen dependent, we must estimate the tissue composition of the specimen and adjust the molecular signal for this composition. Using the idea of a chemical mass balance, we consider the total measured concentrations to be a weighted sum of the individual tissue signatures, where weights are determined by the relative amounts of the different tissue types. We develop a compositional source apportionment model to estimate the relative amounts of tissue components in a tumor specimen. We then use these estimates to infer the tissue-specific concentrations of key molecular targets for sub-typing individual tumors. We anticipate these specific measurements will greatly improve our ability to discriminate between different classes of tumors, and allow more precise matching of each patient to the appropriate treatment.
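To make the mass-balance idea concrete, here is a minimal sketch that estimates tissue proportions by non-negative least squares followed by renormalization; the signature matrix and measurements are invented toy numbers, and the paper's actual compositional source apportionment model is more elaborate than this stand-in:

    import numpy as np
    from scipy.optimize import nnls

    # Columns = tissue types (e.g. normal, dysplastic, cancerous, stroma),
    # rows = molecular targets. Toy "signature" concentrations per pure tissue.
    signatures = np.array([
        [1.0, 3.0, 9.0, 0.5],
        [2.0, 2.5, 1.0, 4.0],
        [0.5, 6.0, 7.0, 0.2],
        [3.0, 1.0, 0.8, 5.0],
        [1.5, 4.0, 6.5, 1.0],
    ])

    # Observed bulk measurements: an unknown mixture of the pure signatures.
    true_weights = np.array([0.2, 0.1, 0.5, 0.2])
    measured = signatures @ true_weights

    # Chemical mass balance: measured ~ signatures @ weights, with weights >= 0.
    weights, _ = nnls(signatures, measured)
    weights /= weights.sum()   # close to unit sum (compositional tissue proportions)

    print("estimated tissue proportions:", np.round(weights, 3))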
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions, the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1-p. There are many statistical tests used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample room for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to nice graphical representations where large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
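For reference, a minimal numerical HWE test of the kind the abstract contrasts with the graphical approach: the classical chi-square test from genotype counts (the counts are toy values, and the graphical ternary-plot test of Graffelman & Morales is not reproduced here):

    import numpy as np
    from scipy.stats import chi2

    # Toy genotype counts for a bi-allelic SNP.
    n_AA, n_AB, n_BB = 298, 489, 213
    n = n_AA + n_AB + n_BB

    # Allele frequencies: p for A, q = 1 - p for B.
    p = (2 * n_AA + n_AB) / (2 * n)
    q = 1 - p

    # Expected counts under Hardy-Weinberg proportions p^2, 2pq, q^2.
    expected = n * np.array([p**2, 2 * p * q, q**2])
    observed = np.array([n_AA, n_AB, n_BB])

    # Classical chi-square statistic with 1 degree of freedom.
    stat = ((observed - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=1)
    print(f"chi-square = {stat:.3f}, p-value = {p_value:.3f}")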
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
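A minimal sketch of the transform, fit, back-transform cycle described above, using the centred log-ratio transform and a rank-1 (Lee-Carter-like) decomposition; the data are random placeholders, the choice of log-ratio transform is an assumption, and the paper's actual model (including the forecasting step and the multiple-decrement extension) is richer:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy death densities: years x age groups, each row sums to 1 (life-table d_x).
    raw = rng.gamma(shape=2.0, size=(30, 10))
    dens = raw / raw.sum(axis=1, keepdims=True)

    def clr(x):
        """Centred log-ratio transform: maps the simplex into real space."""
        logx = np.log(x)
        return logx - logx.mean(axis=1, keepdims=True)

    def clr_inv(z):
        """Back-transform and re-close so rows honour the unit-sum constraint."""
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Lee-Carter-like structure on transformed densities:
    # clr(d_{x,t}) ~ a_x + b_x * k_t, estimated with a rank-1 SVD.
    z = clr(dens)
    a = z.mean(axis=0)                       # age pattern (intercept)
    u, s, vt = np.linalg.svd(z - a, full_matrices=False)
    k = u[:, 0] * s[0]                       # time index k_t
    b = vt[0]                                # age loadings b_x

    fitted = clr_inv(a + np.outer(k, b))     # fitted densities, rows sum to 1
    print("max row-sum error:", np.abs(fitted.sum(axis=1) - 1).max())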
Abstract:
Self-organizing maps (Kohonen, 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm involves the use of the Euclidean metric in the process of adaptation of the model vectors, thus rendering, in theory, the whole methodology incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem: 1. whether the conventional approach using the Euclidean metric can yield valid results with compositional data; 2. whether a modification of the conventional approach replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering) can converge to an adequate solution. Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved". Keywords: Self-Organizing Map; Artificial Neural Networks; Compositional Data
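A minimal sketch of the two canonical simplex operators mentioned above, perturbation and powering, which the modified algorithm substitutes for vector addition and scalar multiplication (toy compositions; the SOM update rule itself is not shown here):

    import numpy as np

    def close(x):
        """Closure: rescale a positive vector so that it sums to 1."""
        x = np.asarray(x, dtype=float)
        return x / x.sum()

    def perturb(x, y):
        """Perturbation: the simplex analogue of vector addition."""
        return close(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

    def power(alpha, x):
        """Powering: the simplex analogue of scalar multiplication."""
        return close(np.asarray(x, dtype=float) ** alpha)

    x = close([0.2, 0.3, 0.5])
    y = close([0.4, 0.4, 0.2])
    print("perturbation:", perturb(x, y))
    print("powering (alpha=0.5):", power(0.5, x))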
Abstract:
In most psychological tests and questionnaires, a test score is obtained by taking the sum of the item scores. In virtually all cases where the test or questionnaire contains multidimensional forced-choice items, this traditional scoring method is also applied. We argue that the summation of scores obtained with multidimensional forced-choice items produces uninterpretable test scores. Therefore, we propose three alternative scoring methods: a weak and a strict rank preserving scoring method, which both allow an ordinal interpretation of test scores; and a ratio preserving scoring method, which allows a proportional interpretation of test scores. Each proposed scoring method yields an index for each respondent indicating the degree to which the response pattern is inconsistent. Analysis of real data showed that with respect to rank preservation, the weak and strict rank preserving methods resulted in lower inconsistency indices than the traditional scoring method; with respect to ratio preservation, the ratio preserving scoring method resulted in lower inconsistency indices than the traditional scoring method.
Abstract:
We shall call an n × p data matrix fully-compositional if the rows sum to a constant, and sub-compositional if the variables are a subset of a fully-compositional data set. Such data occur widely in archaeometry, where it is common to determine the chemical composition of ceramic, glass, metal or other artefacts using techniques such as neutron activation analysis (NAA), inductively coupled plasma spectroscopy (ICPS), X-ray fluorescence analysis (XRF), etc. Interest often centres on whether there are distinct chemical groups within the data and whether, for example, these can be associated with different origins or manufacturing technologies.
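A small sketch, on invented toy data, of the two definitions just given: checking whether an n × p matrix is fully-compositional (constant row sums) and extracting a sub-composition by selecting a subset of variables and re-closing the rows:

    import numpy as np

    def is_fully_compositional(X, tol=1e-9):
        """True if every row of X sums to the same constant."""
        row_sums = X.sum(axis=1)
        return np.allclose(row_sums, row_sums[0], atol=tol)

    def sub_composition(X, columns):
        """Select a subset of variables and re-close each row to sum to 1."""
        sub = X[:, columns]
        return sub / sub.sum(axis=1, keepdims=True)

    # Toy chemical compositions (percentages of four oxides, rows sum to 100).
    X = np.array([
        [55.0, 20.0, 15.0, 10.0],
        [60.0, 18.0, 12.0, 10.0],
        [52.0, 25.0, 13.0, 10.0],
    ])
    print(is_fully_compositional(X))          # True
    print(sub_composition(X, [0, 1, 2]))      # sub-composition of three oxides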