974 results for Lorenz curve
Abstract:
This paper presents the Gini coefficient, the dissimilarity index and the Lorenz curve for the Spanish port system by type of goods from 1960 to 2010 for the business units Total traffic, Liquid bulk cargo, Solid bulk cargo, General merchandise and Containers (TEUs), with the aim of characterizing the Spanish port system over these periods and proposing future strategies.
Abstract:
Canonical Correlation Analysis for Interpreting Airborne Laser Scanning Metrics along the Lorenz Curve of Tree Size Inequality
Abstract:
The purpose of this study was to compare a number of state-of-the-art methods in airborne laser scanning (ALS) remote sensing with regard to their capacity to describe tree size inequality and other indicators related to forest structure. The indicators chosen were based on the analysis of the Lorenz curve: Gini coefficient (GC), Lorenz asymmetry (LA), and the proportions of basal area (BALM) and stem density (NSLM) stocked above the mean quadratic diameter. Each method belonged to one of these estimation strategies: (A) estimating indicators directly; (B) estimating the whole Lorenz curve; or (C) estimating a complete tree list. Across these strategies, the most popular statistical methods for the area-based approach (ABA) were used: regression, random forest (RF), and nearest neighbour imputation. The latter included distance metrics based on either RF (NN-RF) or most similar neighbour (MSN). In the case of tree list estimation, methods based on individual tree detection (ITD) and semi-ITD, both combined with MSN imputation, were also studied. The most accurate method was direct estimation by best subset regression, which obtained the lowest cross-validated coefficients of variation of the root mean squared error, CV(RMSE), for most indicators: GC (16.80%), LA (8.76%), BALM (8.80%) and NSLM (14.60%). Similar figures [CV(RMSE) 16.09%, 10.49%, 10.93% and 14.07%, respectively] were obtained by MSN imputation of tree lists by ABA, a method that also showed a number of additional advantages, such as better distributing the residual variance along the predictive range. In light of our results, ITD approaches may be clearly inferior to ABA with regard to describing the structural properties related to tree size inequality in forested areas.
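As an illustration of the Lorenz-curve indicators named above, the following sketch computes GC, LA, BALM and NSLM from a tree list. The diameter data are hypothetical, and the exact definitions used (Gini computed on basal areas, Lorenz asymmetry via Damgaard and Weiner's interpolation at the mean) are common forestry conventions assumed here, not taken from the paper.

```python
import numpy as np

d = np.array([8.2, 11.5, 14.0, 17.3, 21.8, 24.1, 29.7, 35.2])  # dbh in cm (hypothetical)
g = np.pi / 4 * (d / 100) ** 2          # basal area per tree, m^2
x = np.sort(g)
n, mu = x.size, x.mean()

# Gini coefficient of basal area (sample formula on sorted values)
gc = (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

# Lorenz asymmetry (Damgaard & Weiner 1998), interpolating at the mean
m = np.searchsorted(x, mu)              # number of trees below the mean
delta = (mu - x[m - 1]) / (x[m] - x[m - 1])
la = (m + delta) / n + (x[:m].sum() + delta * x[m]) / x.sum()

# Proportions of basal area (BALM) and stems (NSLM) above the quadratic mean diameter
dq = np.sqrt(np.mean(d ** 2))
above = d > dq
balm, nslm = g[above].sum() / g.sum(), above.mean()
print(f"GC={gc:.3f}  LA={la:.3f}  BALM={balm:.3f}  NSLM={nslm:.3f}")
```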
Abstract:
OBJECTIVE To analyze the patterns and legal requirements of methylphenidate consumption. METHODS We conducted a cross-sectional study of the data from prescription notification forms and balance lists of drug sales – psychoactive and others – subject to special control in the fifth largest city of Brazil, in 2006. We determined the defined and prescribed daily doses, the average prescription and dispensation periods, and the regional sales distribution in the municipality. In addition, we estimated the costs of drug acquisition and analyzed the individual drug consumption profile using the Lorenz curve. RESULTS The balance lists covered all notified sales of the drug, while data from prescription notification forms covered 50.6% of the pharmacies that sold it, including those with the highest sales volumes. Total methylphenidate consumption was 0.37 DDD/1,000 inhabitants/day. Sales were concentrated in more developed areas, and regular-release tablets were the most commonly prescribed pharmaceutical formulation. In some regions of the city, approximately 20.0% of the prescriptions and dispensations exceeded 30 mg/day and 30 days of treatment. CONCLUSIONS Methylphenidate was widely consumed in the municipality, mainly in the most developed areas. Of note, the formulations with the highest abuse risk were the most commonly consumed. Both its prescription and dispensation contrasted with current pharmacotherapeutic recommendations and legal requirements. Therefore, the commercialization of methylphenidate should be monitored more closely, and its use in the treatment of behavioral changes of psychological disorders needs to be discussed in detail, in line with the concepts of the quality use of medicines.
Abstract:
Introduction More than half of the malaria cases reported in the Americas are from the Brazilian Amazon region. While malaria is considered endemic in this region, its geographical distribution is extremely heterogeneous. Therefore, it is important to investigate the distribution of malaria and to determine regions where action might be necessary. Methods Changes in malaria indicators in all municipalities of the Brazilian Amazon between 2003-2004 and 2008-2009 were studied. The malaria indicators included the absolute number of malaria cases and deaths, the bi-annual parasite incidence (BPI), BPI ratios and differences, the Lorenz curve and Gini coefficients. Results During the study period, mortality from malaria remained low (0.02% deaths/case), the percentage of municipalities that became malaria-free increased from 15.6% to 31.7%, and the Gini coefficient increased from 82% to 87%. In 2003, the 10% of municipalities with the highest BPI accumulated 67% of all malaria cases, compared with 2009, when the 10% of municipalities with the highest BPI had 80% of the malaria cases. Conclusions This study described an overall decrease in malaria transmission in the Brazilian Amazon region. As expected, an increased heterogeneity of malaria indicators was found, which reinforces the notion that a single strategy may not bring about uniformly good outcomes. The geographic clustering of municipalities identified as problem areas might help to define better intervention methods.
Abstract:
Presentation given at the APHO Staff Conference 2004. Includes slides on how the distribution of a variable (inequality) can theoretically be modified, how a Lorenz curve is drawn, and how a Gini coefficient is calculated.
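For readers who want the mechanics behind those slides, here is a minimal sketch of both constructions: the Lorenz curve as cumulative population share against cumulative income share, and the Gini coefficient as twice the area between the curve and the line of equality. The income vector is hypothetical.

```python
import numpy as np

income = np.sort(np.array([12.0, 18, 25, 31, 47, 64, 90, 150]))
n = income.size

# Lorenz curve: cumulative population share vs cumulative income share
p = np.arange(n + 1) / n                           # 0, 1/n, ..., 1
L = np.concatenate(([0.0], np.cumsum(income))) / income.sum()

# Gini = 1 - 2 * (area under the Lorenz curve), via the trapezoidal rule
gini = 1 - 2 * np.trapz(L, p)
print(list(zip(p, L)), gini)
```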
Abstract:
Although the histogram is the most widely used density estimator, it is well-known that the appearance of a constructed histogram for a given bin width can change markedly for different choices of anchor position. In this paper we construct a stability index $G$ that assesses the potential changes in the appearance of histograms for a given data set and bin width as the anchor position changes. If a particular bin width choice leads to an unstable appearance, the arbitrary choice of any one anchor position is dangerous, and a different bin width should be considered. The index is based on the statistical roughness of the histogram estimate. We show via Monte Carlo simulation that densities with more structure are more likely to lead to histograms with unstable appearance. In addition, ignoring the precision to which the data values are provided when choosing the bin width leads to instability. We provide several real data examples to illustrate the properties of $G$. Applications to other binned density estimators are also discussed.
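The anchor effect the abstract describes is easy to reproduce. The sketch below (which does not implement the paper's index $G$, only the phenomenon it measures) keeps the bin width fixed and shifts the bin origin, producing visibly different bin counts for the same hypothetical data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 0.5, 200)])
h = 0.75                                          # fixed bin width

for anchor in (0.0, 0.25, 0.5):                   # shift the bin origin
    lo = anchor + h * np.floor((x.min() - anchor) / h)
    edges = np.arange(lo, x.max() + h, h)
    counts, _ = np.histogram(x, bins=edges)
    print(f"anchor={anchor:4.2f}  counts={counts}")
```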
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization with respect to, for example, the Treynor ratio or Jensen's alpha alone. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those of virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
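A minimal sketch of the second-order dominance check described above, assuming the usual empirical construction: the absolute Lorenz curve at p is the integral of the quantile function from 0 to p (equivalently, a cumulative average of the lowest returns), and dominance is a pointwise comparison on a quantile grid. The return series are hypothetical, not the thesis' portfolios.

```python
import numpy as np

def absolute_lorenz(returns, grid):
    """Empirical AL(p): integral of the quantile function from 0 to p."""
    x = np.sort(returns)
    n = x.size
    cum = np.concatenate(([0.0], np.cumsum(x))) / n   # AL at p = 0, 1/n, ..., 1
    return np.interp(grid, np.arange(n + 1) / n, cum)

grid = np.linspace(0.01, 1.0, 100)
a = np.random.default_rng(1).normal(0.08, 0.10, 500)  # aggregated strategy
b = np.random.default_rng(2).normal(0.06, 0.15, 500)  # single-measure strategy
if np.all(absolute_lorenz(a, grid) >= absolute_lorenz(b, grid)):
    print("a second-order stochastically dominates b on this grid")
```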
Abstract:
The present study focuses on defining certain measures of income inequality for truncated distributions, the characterization of probability distributions using the functional form of these measures, the extension of some measures of inequality and stability to higher dimensions, the characterization of bivariate models using the above concepts, and the estimation of some measures of inequality using Bayesian techniques. The thesis defines certain measures of income inequality for truncated distributions and studies the effect of truncation upon these measures. An important measure used in reliability theory to assess the stability of a component is the residual entropy function. This concept can be advantageously used as a measure of inequality of truncated distributions. The geometric mean comes up as a handy tool in the measurement of income inequality. The geometric vitality function, being the geometric mean of the truncated random variable, can be advantageously utilized to measure inequality of truncated distributions. The study includes the problem of estimating the Lorenz curve, Gini index and variance of logarithms for the Pareto distribution using Bayesian techniques.
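For the Pareto case mentioned at the end, here is a short sketch of how such Bayesian estimation can proceed; this is a textbook conjugate setup assumed for illustration, not necessarily the thesis' approach. With known scale x_min, a Gamma prior on the shape alpha yields a Gamma posterior, and the closed forms Gini = 1/(2*alpha - 1) and Var(log X) = 1/alpha^2 can be averaged over posterior draws.

```python
import numpy as np

rng = np.random.default_rng(0)
x_min, alpha_true = 1.0, 2.5
x = x_min * (1 - rng.random(300)) ** (-1 / alpha_true)   # Pareto sample (hypothetical)

a, b = 1.0, 1.0                                          # Gamma(shape, rate) prior
T = np.log(x / x_min).sum()
alpha_post = rng.gamma(a + x.size, 1 / (b + T), 10_000)  # conjugate posterior draws

gini = 1 / (2 * alpha_post - 1)                          # Gini index of Pareto(alpha)
var_logs = 1 / alpha_post ** 2                           # variance of logarithms
print(gini.mean(), var_logs.mean())
```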
Abstract:
Partial moments are extensively used in actuarial science for the analysis of risks. Since the first-order partial moment provides the expected loss in a stop-loss treaty with infinite cover as a function of the priority, it is referred to as the stop-loss transform. In the present work, we discuss distributional and geometric properties of the first- and second-order partial moments defined in terms of the quantile function. Relationships of the scaled stop-loss transform curve with the Lorenz, Gini, Bonferroni and Leimkuhler curves are developed.
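One way to see the Lorenz connection concretely: for an empirical distribution, the stop-loss transform evaluated at a sample quantile satisfies E[(X - Q(p))_+] = mu(1 - L(p)) - (1 - p)Q(p), where L is the Lorenz curve and mu the mean. The sketch below verifies this identity on a hypothetical sample; it is a standard consequence of the definitions, not the paper's notation.

```python
import numpy as np

x = np.sort(np.array([1.2, 2.0, 3.5, 4.1, 6.8, 9.0, 14.5, 22.0]))
n, mu = x.size, x.mean()

for k in range(1, n):                      # evaluate at p = k/n, where Q(p) = x[k-1]
    p, q = k / n, x[k - 1]
    stop_loss = np.maximum(x - q, 0).mean()     # direct E[(X - t)_+]
    L = x[:k].sum() / x.sum()                   # Lorenz curve at p
    via_lorenz = mu * (1 - L) - (1 - p) * q
    print(f"p={p:.3f}  direct={stop_loss:.4f}  via Lorenz={via_lorenz:.4f}")
```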
Abstract:
This paper investigates the income inequality generated by a job-search process when different cohorts of homogeneous workers are allowed to have different degrees of impatience. Using the fact that the average wage under the invariant Markovian distribution is a decreasing function of the discount factor (Cysne (2004, 2006)), I show that the Lorenz curve and the between-cohort Gini coefficient of income inequality can be easily derived in this case. An example with arbitrary measures regarding the wage offers and the distribution of time preferences among cohorts provides some insights into how much income inequality can be generated, and into how it varies as a function of the probability of unemployment and of the probability that the worker does not find a job offer each period.
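Once the invariant wage distribution is in hand, the Lorenz curve and Gini coefficient follow mechanically. A minimal sketch for a discrete distribution (the wage grid and invariant probabilities below are hypothetical placeholders, not the paper's calibration):

```python
import numpy as np

w = np.array([0.0, 1.0, 1.5, 2.2, 3.0])        # wage levels (0 = unemployed)
pi = np.array([0.10, 0.25, 0.30, 0.20, 0.15])  # invariant probabilities

order = np.argsort(w)
w, pi = w[order], pi[order]
p = np.concatenate(([0.0], np.cumsum(pi)))                 # population shares
L = np.concatenate(([0.0], np.cumsum(w * pi))) / (w @ pi)  # income shares

gini = 1 - 2 * np.trapz(L, p)     # twice the area between curve and diagonal
print(gini)
```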
Abstract:
This paper investigates the income inequality generated by a job-search process when different cohorts of homogeneous workers are allowed to have different degrees of impatience. Using the fact that the average wage under the invariant Markovian distribution is a decreasing function of the time preference (Cysne (2004)), I show that the Lorenz curve and the between-cohort Gini coefficient of income inequality can be easily derived in this case. An example with arbitrary measures regarding the wage offers and the distribution of time preferences among cohorts provides some quantitative insights into how much income inequality can be generated, and into how it varies as a function of the probability of unemployment and of the probability that the worker does not find a job offer each period.
Abstract:
In the first part of this paper we examine different formulas for computing the concentration ratio known as the Gini coefficient or index, and the failure of some of them to satisfy the axiom known as "replication invariance" or "Dalton's population principle". The scope of the conclusions is limited to the behaviour of the formulas put to the test (they are among the best known) when applied to distributions of disaggregated data. In the second part we propose a correction factor for the analyzed formulas so that they satisfy the population principle.