5 results for Mean-variance.
at Universitat de Girona, Spain
Abstract:
This thesis aims to make a methodological contribution to the field of strategic management through three objectives: reviewing the concept of ex post, or realized, risk for the field of strategic management; operationalizing this concept as a valid risk measure; and exploring the possibilities and interest of decomposing risk into different determinants that can explain its nature. The first objective is carried out by taking the intuitive concept of risk as a starting point and reviewing the literature in the most closely related fields, especially behavioural decision theory and strategic management. The analysis leads to formulating the ex post risk of an activity as the degree to which the objectives set for that activity have not been achieved. Applying this definition to the field of strategic management implies that the objectives must lead to the attainment of sustainable competitive advantage, which reveals the interest of measuring risk in the short term, that is, statically, and in the long term, that is, dynamically; accordingly, a measure of Static Risk and a measure of Dynamic Risk are defined, respectively. The analysis reveals four basic conceptual dimensions to be incorporated into the measures: sign dependence, relativity, longitudinality, and path dependence. Additionally, the consideration that results may be cardinal or ordinal justifies formulating the two measures first for cardinal results and, secondly, for ordinal results. The proposed risk measures synthesize the ex post results obtained into a measure of the relative centrality of the results, Static Risk, and a measure of the temporal trend of the results, Dynamic Risk. This proposal contrasts with the traditional mean-variance approach. The measures developed are evaluated against a system of conceptual and technical properties elaborated specifically in the thesis, which makes it possible to demonstrate their degree of validity and that of the measures existing in the literature, highlighting the validity problems of the latter. An illustrative theoretical example of the proposed measures is also provided, supporting the evaluation carried out with the system of properties. A notable contribution of this thesis is the demonstration that the proposed risk measures allow the additive decomposition of risk whenever the results, or result differentials, decompose additively. Finally, the thesis includes an application of the cardinal Static and Dynamic Risk measures, as well as their decomposition, to the analysis of the profitability of the Spanish banking sector over the period 1987-1999. The application illustrates the capacity of the proposed measures to analyse the manifestation of competitive advantage, its evolution, and its economic nature. The conclusions outline possible lines of future research.
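The abstract does not give the formulas for Static and Dynamic Risk, so the following is only a hedged illustration of the kind of measure described (target shortfall and temporal trend) and of the additive-decomposition claim. The shortfall target, the slope-based trend, and the ROA split below are hypothetical stand-ins, not the thesis's actual definitions.

```python
import numpy as np

def static_risk(results, target):
    """Hypothetical Static Risk: mean shortfall of results below the target
    (sign-dependent and relative to a reference, per the abstract's dimensions)."""
    shortfall = np.maximum(target - np.asarray(results), 0.0)
    return shortfall.mean()

def dynamic_risk(results):
    """Hypothetical Dynamic Risk: negated slope of the result series over time,
    so a deteriorating trend yields positive risk (longitudinal dimension)."""
    t = np.arange(len(results))
    slope = np.polyfit(t, results, 1)[0]
    return -slope

# Illustrative decomposition: a total return split additively into two components.
roa = np.array([0.12, 0.10, 0.09, 0.07])      # total results
margin = np.array([0.07, 0.06, 0.06, 0.05])   # component 1
turnover = roa - margin                        # component 2

print(static_risk(roa, target=0.10))           # 0.01: mean shortfall below a 10% target
# The slope-based measure is linear in the results, so when the results
# decompose additively, so does this Dynamic Risk:
assert np.isclose(dynamic_risk(roa), dynamic_risk(margin) + dynamic_risk(turnover))
```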
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions, either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, the optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible criteria of optimality induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
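As a toy numerical illustration of the optimality mismatch described above (not the kriging estimator itself), the sketch below shows that back-transforming an estimator that is optimal for log-scores is biased in the original space: exp(E[log Z]) systematically underestimates E[Z], and the standard lognormal correction exp(mu + sigma^2/2) is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a positive (lognormal) variable, as in lognormal kriging.
mu, sigma = 1.0, 0.8
z = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

# Optimal estimate in log space: the mean of the logs.
y_hat = np.log(z).mean()

naive = np.exp(y_hat)                      # naive back-transform of the log-optimal estimate
corrected = np.exp(y_hat + sigma**2 / 2)   # standard lognormal mean correction

print(f"true mean         : {z.mean():.3f}")
print(f"naive exp(E[logZ]): {naive:.3f}  <- biased low")
print(f"exp(mu + s^2/2)   : {corrected:.3f}")
```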
Abstract:
Hydrogeological research usually includes statistical studies devised to elucidate the mean background state, characterise relationships among different hydrochemical parameters, and show the influence of human activities. These goals are achieved either by means of a statistical approach or by mixing models between end-members. Compositional data analysis has proved effective with the first approach, but there is no commonly accepted solution to the end-member problem in a compositional framework. We present here a possible solution based on factor analysis of compositions, illustrated with a case study. We find two factors on the compositional biplot by fitting two non-centred orthogonal axes to the most representative variables. Each of these axes defines a subcomposition, grouping those variables that lie nearest to it. With each subcomposition a log-contrast is computed and rewritten as an equilibrium equation. These two factors can be interpreted as the isometric log-ratio (ilr) coordinates of three hidden components, which can be plotted in a ternary diagram. These hidden components might be interpreted as end-members. We have analysed 14 molarities at 31 sampling stations along the Llobregat River and its tributaries, measured monthly over two years. We have obtained a biplot with 57% of the total variance explained, from which we have extracted two factors: factor G, reflecting the geological background enhanced by potash mining; and factor A, essentially controlled by urban and/or farming wastewater. Graphical representation of these two factors allows us to identify three extreme samples, corresponding to pristine waters, potash mining influence and urban sewage influence. To confirm this, we have available analyses of diffuse and widespread point sources identified in the area: springs, potash mining lixiviates, sewage, and fertilisers. Each of these sources shows a clear link with one of the extreme samples, except fertilisers, owing to the heterogeneity of their composition. This approach is a useful tool for distinguishing end-members and characterising them, an issue generally difficult to solve. It is worth noting that the end-member composition cannot be fully estimated, only characterised through log-ratio relationships among components. Moreover, the influence of each end-member in a given sample can only be evaluated relative to the other samples. These limitations are intrinsic to the relative nature of compositional data.
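The log-contrast step can be made concrete: a balance between two groups of parts is an ilr coordinate of the form sqrt(rs/(r+s)) * ln(g(x_R)/g(x_S)), where g denotes the geometric mean. The sketch below computes such balances; the grouping of variables and the sample values are invented for illustration (the Llobregat data are not reproduced here).

```python
import numpy as np

def balance(x, group_r, group_s):
    """ilr balance (log-contrast) between two groups of compositional parts:
    sqrt(r*s/(r+s)) * ln(gmean(x[group_r]) / gmean(x[group_s]))."""
    x = np.asarray(x, dtype=float)
    r, s = len(group_r), len(group_s)
    gmean = lambda idx: np.exp(np.log(x[idx]).mean())
    return np.sqrt(r * s / (r + s)) * np.log(gmean(group_r) / gmean(group_s))

# Hypothetical molarities; the two groupings mimic the paper's idea of axes
# defining subcompositions (indices and values are illustrative only).
sample = [0.30, 0.20, 0.15, 0.25, 0.10]
factor_G = balance(sample, group_r=[0, 1], group_s=[2])   # e.g. geological parts
factor_A = balance(sample, group_r=[3], group_s=[4])      # e.g. anthropogenic parts
print(factor_G, factor_A)
```

Because balances are ratios of geometric means, they are invariant to closure, which is why no normalisation of the sample is needed before computing them.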
Abstract:
There is hardly a case in exploration geology where the data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are genuine zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between sodic and potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values with a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between certain elements. For example, in copper porphyry deposits there is always a good direct correlation between copper values and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
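A minimal sketch of the proposed replacement step, under assumptions: hypothetical Cu and Mo arrays with below-detection Mo coded as 0, a detection limit dl, and a fit restricted to the lower quartile of the detected Mo values as the abstract describes. The abstract does not specify the regression space; a log-log fit is assumed here, as is common for lognormally distributed geochemical data.

```python
import numpy as np

def impute_rounded_zeros(cu, mo, dl):
    """Replace below-detection Mo values via a regression on Cu, fitted on the
    lower quartile of the *detected* Mo values (log-log fit is an assumption)."""
    cu, mo = np.asarray(cu, float), np.asarray(mo, float)
    detected = mo >= dl
    q1 = np.quantile(mo[detected], 0.25)
    low = detected & (mo <= q1)                  # lower quartile of real values
    slope, intercept = np.polyfit(np.log(cu[low]), np.log(mo[low]), 1)
    mo_imputed = mo.copy()
    mo_imputed[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
    return mo_imputed

# Illustrative data: ppm Cu and Mo, with Mo below a 1 ppm detection limit coded as 0.
cu = np.array([1500., 1100., 700., 500., 320., 250., 180., 120., 60., 40.])
mo = np.array([  40.,   25.,  12.,   8.,   5.,   4.,   3.,   2.,  0.,  0.])
print(impute_rounded_zeros(cu, mo, dl=1.0))
```

Note that the two imputed values differ from each other, reflecting the method's key advantage: each replacement depends on the covariate rather than being a fixed fraction of the detection limit.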
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure on the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structure available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a basis derived from Hermite polynomials. To obtain the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. We therefore propose a weighted linear regression approach, in which all polynomials up to order k are used as predictor variables and the weights are proportional to the reference density. Finally, for the case of second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison, among different rocks, of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
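The weighted-regression step can be sketched as follows, under assumptions: an unbounded variable with a standard normal reference, probabilists' Hermite polynomials from numpy.polynomial.hermite_e as regressors, an empirical log-density ratio estimated by kernel density, and weights proportional to the reference density. The exact estimator proposed in the talk may differ; the coefficients below are with respect to the unnormalized Hermite basis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.stats import norm, gaussian_kde

# Hypothetical sample (e.g. log grain sizes) to be expressed in coordinates.
rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.2, size=2000)

# Empirical log-density ratio against the standard normal reference density.
grid = np.linspace(-4, 4, 200)
log_ratio = np.log(gaussian_kde(x)(grid)) - norm.logpdf(grid)

# Weighted least squares: Hermite polynomials of orders 1..k as regressors,
# weights proportional to the reference density, as the abstract suggests.
k = 2
H = hermevander(grid, k)[:, 1:]        # drop order 0 (absorbed by normalization)
w = norm.pdf(grid)
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * H, sw * log_ratio, rcond=None)
print(coef)   # coordinates w.r.t. the (unnormalized) Hermite basis
```

Consistent with the abstract's closing remark, for k = 2 under the normal reference these coordinates are in one-to-one correspondence with the classical mean and variance of the sample.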