12 results for Lognormal kriging

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

70.00%

Publisher:

Abstract:

Kriging is an interpolation technique whose optimality criteria are based on normality assumptions, either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible optimality criteria induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
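The back-transformation problem described above can be illustrated numerically: for a lognormal variable Z = exp(Y), naively exponentiating an optimal estimate of the mean of Y underestimates E[Z]; the lognormal correction exp(sigma^2/2) is needed. A minimal sketch with invented parameters (mu, sigma), not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.8
y = rng.normal(mu, sigma, size=200_000)   # scores in the transformed (Gaussian) space
z = np.exp(y)                             # original positive-valued variable

# Optimal (unbiased) estimate of the mean in log space ...
y_hat = y.mean()
# ... naively back-transformed: systematically below E[Z]
naive = np.exp(y_hat)
# Lognormal correction term exp(var/2) restores the mean in the original space
corrected = np.exp(y_hat + y.var() / 2)

print(naive, corrected, z.mean())
```

The same incompatibility is what makes confidence intervals for lognormal kriging estimates awkward: an interval that is optimal in log space does not back-transform into an optimal interval in the original space.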

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this work is to provide a methodology for reducing the computation time of the kriging interpolation method without any loss in the quality of the resulting model. The solution adopted is the parallelisation of the algorithm using MPI in C. As a prerequisite, the fitting of the variogram that best matches the spatial distribution of the study variable was automated. Experimental results demonstrate the validity of the implemented solution, significantly reducing the total execution time of the whole process.
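The parallelisation is natural because, once the variogram is fitted, kriging predictions at distinct target locations are independent and can be scattered across workers. The abstract's implementation is MPI over C; as a hedged stand-in, the same decomposition can be sketched in Python with a thread pool, a toy exponential covariance model, and simple kriging (all data and model parameters here are invented):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
obs_xy = rng.uniform(0, 10, size=(50, 2))        # observation locations
obs_z = np.sin(obs_xy[:, 0]) + rng.normal(0, 0.1, 50)

def cov(h, sill=1.0, rang=3.0):
    """Exponential covariance model (the automated variogram fit is assumed done)."""
    return sill * np.exp(-h / rang)

d_obs = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
C = cov(d_obs) + 1e-8 * np.eye(len(obs_xy))      # covariance among observations

def krige_chunk(targets):
    """Simple kriging for one chunk of target points (one 'rank')."""
    d = np.linalg.norm(targets[:, None] - obs_xy[None, :], axis=-1)
    w = np.linalg.solve(C, cov(d).T)             # kriging weights, one column per target
    return w.T @ (obs_z - obs_z.mean()) + obs_z.mean()

targets = rng.uniform(0, 10, size=(200, 2))
chunks = np.array_split(targets, 4)              # scatter targets across 4 workers
with ThreadPoolExecutor(max_workers=4) as ex:
    parts = list(ex.map(krige_chunk, chunks))
z_hat = np.concatenate(parts)                    # gather, order preserved
print(z_hat.shape)  # (200,)
```

Because each chunk only reads the shared factorizable system, the chunked result is bit-identical to the serial one; in the MPI/C setting the same holds after an ordered gather.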

Relevance:

20.00%

Publisher:

Abstract:

Several estimators of the expectation, median and mode of the lognormal distribution are derived. They aim to be approximately unbiased, efficient, or have a minimax property in the class of estimators we introduce. The small-sample properties of these estimators are assessed by simulations and, when possible, analytically. Some of these estimators of the expectation are far more efficient than the maximum likelihood or the minimum-variance unbiased estimator, even for substantial sample sizes.
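The small-sample bias issue can be reproduced with a short simulation. For Z lognormal, E[Z] = exp(mu + sigma^2/2); the ML plug-in estimator exp(ybar + s^2/2) is biased upward in small samples, while the arithmetic mean of the raw data is unbiased (though inefficient). This sketch uses invented parameters and is not the paper's estimator class:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 0.0, 1.0, 20
true_mean = np.exp(mu + sigma**2 / 2)          # E[Z] for Z ~ Lognormal(mu, sigma)

ml, arith = [], []
for _ in range(20_000):
    y = rng.normal(mu, sigma, n)               # log-scale sample
    s2 = y.var()                               # ML variance estimate (divisor n)
    ml.append(np.exp(y.mean() + s2 / 2))       # ML plug-in estimator of E[Z]
    arith.append(np.exp(y).mean())             # arithmetic mean of the raw data

print(true_mean, np.mean(ml), np.mean(arith))
```

With n = 20 the ML plug-in overshoots the true mean by roughly 1-2%, which is the kind of small-sample behavior the paper's approximately unbiased estimators are designed to remove.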

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods. The methods are implemented using the statistical package R. Parametric analysis is limited to estimation of normal and lognormal distributions for each of the two claim types. The non-parametric analysis presented involves kernel density estimation. We illustrate the benefits of applying transformations to data prior to employing kernel-based methods. We use a log-transformation and an optimal transformation amongst a class of transformations that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, we have included in the Appendix of this paper all the R code that has been used in the analysis, so that readers, both students and educators, can fully explore the techniques described.
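The paper's code is in R; the log-transformation idea it illustrates can be sketched in Python with synthetic, right-skewed "claims" (all numbers invented). A Gaussian KDE applied directly to positive, heavy-tailed data leaks probability mass below the data range, while estimating the density of log X and mapping back via f_X(x) = f_Y(log x)/x respects positivity:

```python
import numpy as np

rng = np.random.default_rng(3)
claims = rng.lognormal(mean=7.0, sigma=1.2, size=500)   # synthetic skewed claim sizes

def gaussian_kde(sample, grid):
    """Plain Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth."""
    n = len(sample)
    h = 1.06 * sample.std() * n ** (-1 / 5)
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

def trapezoid(f, x):
    """Trapezoidal integration (avoids depending on a library integrator)."""
    return float(((f[1:] + f[:-1]) * np.diff(x)).sum() / 2)

x = np.geomspace(1.0, claims.max() * 2, 1000)
f_direct = gaussian_kde(claims, x)                       # KDE on the raw scale
f_logkde = gaussian_kde(np.log(claims), np.log(x)) / x   # KDE on log scale, mapped back

# The direct estimate loses mass below the grid (and to negative values);
# the transformed estimate keeps essentially all its mass on the positive line.
print(trapezoid(f_direct, x), trapezoid(f_logkde, x))
```

The "optimal symmetrizing transformation" in the paper is a refinement of the same principle: estimate where the data look Gaussian, then transform back.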

Relevance:

10.00%

Publisher:

Abstract:

This paper examines a dataset which is modeled well by the Poisson-lognormal process, and by this process mixed with lognormal data, both turned into compositions. This generates compositional data that has zeros without any need for conditional models or assuming that there is missing or censored data that needs adjustment. It also enables us to model dependence on covariates and within the composition.
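The mechanism by which zeros arise naturally can be sketched in a few lines: lognormal intensities drive Poisson counts, and closing the counts to unit sum yields a composition in which some parts are exactly zero, with no censoring assumption. Dimensions and parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
D, n = 4, 6                                        # parts per composition, samples
mu = np.array([3.0, 1.5, 0.0, -1.5])               # log-scale means of the D parts

lam = np.exp(rng.normal(mu, 0.5, size=(n, D)))     # lognormal intensities
counts = rng.poisson(lam)                          # Poisson-lognormal counts: zeros occur
comp = counts / counts.sum(axis=1, keepdims=True)  # closure to a composition

print(counts)
print(comp.sum(axis=1))                            # each row sums to 1
```

Covariates would enter through the log-intensities (e.g. mu as a linear predictor), which is how the model handles dependence both on covariates and within the composition.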

Relevance:

10.00%

Publisher:

Abstract:

There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always exchange that zero for the value of 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros".
So, we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.
Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
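The Cu-Mo regression imputation can be sketched numerically. All grades, the detection limit, and the correlation structure below are invented for illustration; the key point is that each imputed "rounded zero" varies with the corresponding copper value instead of being a constant such as half the detection limit:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
log_cu = rng.normal(np.log(0.5), 0.6, n)                   # Cu grades (%), lognormal
log_mo = 0.8 * log_cu + rng.normal(np.log(0.01), 0.3, n)   # correlated Mo grades
cu, mo = np.exp(log_cu), np.exp(log_mo)

dl = 0.005                                   # hypothetical Mo detection limit
observed = mo >= dl                          # below-DL values are "rounded zeros"

# Fit log(Mo) ~ log(Cu) on the lower quartile of the observed Mo values,
# then predict Mo for the censored samples from their Cu values.
mo_obs, cu_obs = mo[observed], cu[observed]
low = mo_obs <= np.quantile(mo_obs, 0.25)
b, a = np.polyfit(np.log(cu_obs[low]), np.log(mo_obs[low]), 1)
mo_imputed = np.exp(a + b * np.log(cu[~observed]))

print(observed.sum(), (~observed).sum())     # detected vs. censored counts
print(mo_imputed.min(), mo_imputed.max())    # imputed values spread with Cu
```

Using only the lower quartile keeps the fit close to the censored region of the Mo distribution, which is where the prediction is actually applied.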

Relevance:

10.00%

Publisher:

Abstract:

Our work focuses, first, on getting to know and learning the basics of the Spanish financial market, and then on applying that knowledge to see whether a stated hypothesis is verified. The question we want to resolve is the following: to check whether all the assumptions and results provided by the theoretical models used in the study of financial markets actually hold in practice. Among the many concepts provided by the study of financial markets, we focus above all on the Black-Scholes model and volatility smiles. After gathering the necessary data from the M.E.F.F. website, interviewing professionals in the sector and tracking the movements of the options on the Mini-Íbex 35 index for approximately two months, with the help of a computer program written in C we computed the volatility curves of the options on the Mini-Íbex 35 index. The most important conclusions we draw are that the Black-Scholes model, although it revolutionized the world of financial markets, is based on two assumptions that do not hold in reality: the lognormal distribution of stock prices and a constant volatility. As we have been able to verify, the volatility curve of the options on the Mini-Íbex 35 index is decreasing in the strike price and the moneyness, as the volatility-smile theories maintain; therefore, it is not constant. Moreover, we verified that as an option approaches expiry, the agreed price of the option's underlying asset approaches the market price.
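The computation behind a volatility curve is the inversion of the Black-Scholes formula: for each quoted option price, find the volatility that reproduces it. The authors' program is in C and uses M.E.F.F. data; this is a generic Python sketch of the same inversion with invented inputs, using bisection (Black-Scholes is monotone in volatility):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call: lognormal prices, constant volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    """Bisection on sigma; plotting this against K for market quotes gives the smile."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: recover the volatility used to generate a price.
p = bs_call(100, 95, 0.5, 0.02, 0.25)
print(round(implied_vol(p, 100, 95, 0.5, 0.02), 4))  # → 0.25
```

Under the constant-volatility assumption this curve would be flat in K; the decreasing curve observed for the Mini-Íbex 35 options is exactly the departure the work documents.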

Relevance:

10.00%

Publisher:

Abstract:

In this paper we investigate the goodness of fit of Kirk's approximation formula for spread option prices in the correlated lognormal framework. Towards this end, we use Malliavin calculus techniques to find an expression for the short-time implied volatility skew of options with random strikes. In particular, we obtain that this skew is very pronounced in the case of spread options with extremely high correlations, which cannot be reproduced by a constant volatility approximation as in Kirk's formula. This fact agrees with the empirical evidence. Numerical examples are given.
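For reference, Kirk's formula treats F2 + K as approximately lognormal and prices the spread call with a Black-type formula under a constant effective volatility. A hedged sketch with invented market inputs (forwards, vols, correlation):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def kirk_spread_call(F1, F2, K, T, r, s1, s2, rho):
    """Kirk's approximation for a call on F1 - F2 with strike K in the
    correlated-lognormal model; the effective volatility sk is constant,
    which is precisely what cannot reproduce a pronounced skew."""
    w = F2 / (F2 + K)                                # weight of the second asset
    sk = sqrt(s1**2 - 2 * rho * s1 * s2 * w + (s2 * w) ** 2)
    d1 = (log(F1 / (F2 + K)) + 0.5 * sk**2 * T) / (sk * sqrt(T))
    d2 = d1 - sk * sqrt(T)
    return exp(-r * T) * (F1 * norm_cdf(d1) - (F2 + K) * norm_cdf(d2))

print(round(kirk_spread_call(110, 100, 5.0, 1.0, 0.05, 0.3, 0.25, 0.8), 2))
```

As the correlation approaches one, the effective volatility sk collapses toward |s1 - s2·w|, and the constant-volatility structure has no freedom left to match the strong short-time skew the paper derives.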

Relevance:

10.00%

Publisher:

Abstract:

We study European options on the ratio of the stock price to its average, and vice versa. Some of these options have been traded on the Australian Stock Exchange since 1992, hence we call them Australian Asian options. For geometric averages, we obtain closed-form expressions for option prices. For arithmetic means, we use different approximations that produce very similar results.
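The reason a closed form exists for the geometric average is that, under geometric Brownian motion, the log of the ratio S_T / G_T (G_T the discrete geometric average) is Gaussian, so the option reduces to a Black-Scholes-type formula. A sketch with invented parameters, checked against Monte Carlo (this is a generic construction, not necessarily the paper's exact contract):

```python
import numpy as np
from math import erf, sqrt, exp

def ncdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(6)
r, sigma, T, n, K = 0.02, 0.3, 1.0, 12, 1.0       # monthly monitoring (assumed setup)
t = np.linspace(T / n, T, n)                      # monitoring dates

# X = ln(S_T / G_T) is Gaussian with computable mean and variance.
mu_x = (r - 0.5 * sigma**2) * (T - t.mean())
var_x = sigma**2 * (T - 2 * t.mean() + np.minimum.outer(t, t).mean())
d1 = (mu_x + var_x - np.log(K)) / sqrt(var_x)
d2 = d1 - sqrt(var_x)
closed = exp(-r * T) * (exp(mu_x + 0.5 * var_x) * ncdf(d1) - K * ncdf(d2))

# Monte Carlo check of the same discounted payoff max(S_T / G_T - K, 0).
dW = rng.normal(0, sqrt(T / n), size=(200_000, n))
logS = (r - 0.5 * sigma**2) * t + sigma * dW.cumsum(axis=1)
x = logS[:, -1] - logS.mean(axis=1)               # ln(S_T / G_T); S0 cancels in the ratio
mc = exp(-r * T) * np.maximum(np.exp(x) - K, 0).mean()

print(round(closed, 4), round(mc, 4))
```

For the arithmetic mean no such lognormality holds, which is why approximations (moment matching and the like) are needed there.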

Relevance:

10.00%

Publisher:

Abstract:

Radiative heat exchange at the nanoscale presents a challenge for several areas due to its scope and nature. Here, we provide a thermokinetic description of microscale radiative energy transfer including phonon-photon coupling manifested through a non-Debye relaxation behavior. We show that a lognormal-like distribution of modes of relaxation accounts for this non-Debye relaxation behavior leading to the thermal conductance. We also discuss the validity of the fluctuation-dissipation theorem. The general expression for the thermal conductance we obtain fits existing experimental results with remarkable accuracy. Accordingly, our approach offers an overall explanation of radiative energy transfer through micrometric gaps regardless of geometrical configurations and distances.

Relevance:

10.00%

Publisher:

Abstract:

Phenomena with a constrained sample space appear frequently in practice. This is the case, e.g., with strictly positive data, or with compositional data such as percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his 1879 introduction of the lognormal distribution are recovered and updated.
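On the positive real line the change of reference measure can be checked numerically: the "normal on R+" is the ordinary normal density in the coordinate y = ln x, needing no Jacobian relative to the multiplicative measure dx/x, while the same law relative to Lebesgue measure dx is the classical lognormal density (McAlister's construction). A small numerical verification with invented parameters:

```python
import numpy as np

mu, s = 1.0, 0.5
x = np.geomspace(1e-3, 1e3, 20_001)
# Normal density evaluated in the log-scale coordinate y = ln(x)
norm_logscale = np.exp(-0.5 * ((np.log(x) - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def trapezoid(f, grid):
    return float(((f[1:] + f[:-1]) * np.diff(grid)).sum() / 2)

# Same law, two reference measures: both integrate to 1.
lebesgue_total = trapezoid(norm_logscale / x, x)            # lognormal pdf w.r.t. dx
multiplicative_total = trapezoid(norm_logscale, np.log(x))  # normal pdf w.r.t. dx/x

print(lebesgue_total, multiplicative_total)  # both ≈ 1
```

The D-part simplex case works the same way, with the log-ratio coordinates playing the role of y = ln x.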
