85 results for Estadística matemática


Relevance:

20.00%

Publisher:

Abstract:

There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" pose a mathematical challenge for interpretation. We need to start by recognizing that there are genuine zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment generates spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration well-known relationships between certain elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
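
A minimal sketch of this regression-based replacement, assuming hypothetical paired arrays cu and mo of copper and molybdenum measurements (values below the detection limit coded as zero); the log-log fit and the numpy routines are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    """Sketch: replace below-detection-limit Mo values via regression on Cu.

    Fits a log-log regression of Mo on Cu using only the lower quartile of
    the detected Mo values (those closest to the detection limit), then
    predicts each "rounded zero" from its paired Cu value.
    """
    cu = np.asarray(cu, dtype=float)
    mo = np.asarray(mo, dtype=float)
    detected = mo >= detection_limit
    # Lower quartile of the detected Mo values.
    q1 = np.quantile(mo[detected], 0.25)
    fit_mask = detected & (mo <= q1)
    # Log-log fit, consistent with the lognormal behaviour of geochemical data.
    slope, intercept = np.polyfit(np.log(cu[fit_mask]), np.log(mo[fit_mask]), 1)
    imputed = mo.copy()
    imputed[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
    return imputed
```

Because the replacement is driven by the paired copper value, each rounded zero receives its own estimate rather than a single fixed substitute.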

Relevance:

20.00%

Publisher:

Abstract:

The log-ratio methodology makes available powerful tools for analyzing compositional data. Nevertheless, this methodology can only be used on data sets without null values. Consequently, in data sets where zeros are present, a previous treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not cover the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and estimate the posterior probabilities. Then we apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied. Key words: count data, multiplicative replacement, composition, log-ratio analysis
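
As a sketch of how such a Bayesian-multiplicative treatment can work, assuming a symmetric Dirichlet prior with total strength s (the default prior strength and the helper name are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def bayes_multiplicative_replacement(counts, s=1.0):
    """Sketch of a Bayesian-multiplicative treatment of count zeros.

    A symmetric Dirichlet(s/D) prior is combined with the multinomial
    counts; zero parts are replaced by their posterior expectation, and
    non-zero parts are rescaled multiplicatively so the composition
    still sums to 1.
    """
    counts = np.asarray(counts, dtype=float)
    n, D = counts.sum(), counts.size
    alpha = s / D  # per-part prior strength (an assumption)
    posterior = (counts + alpha) / (n + s)  # Dirichlet-multinomial posterior mean
    zeros = counts == 0
    replaced = posterior.copy()
    # Multiplicative modification: shrink only the non-zero parts.
    replaced[~zeros] = (counts[~zeros] / n) * (1.0 - posterior[zeros].sum())
    return replaced
```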

Relevance:

20.00%

Publisher:

Abstract:

The statistical analysis of compositional data should be carried out using logratios of parts, which are difficult to handle correctly in standard statistical packages. For this reason a freeware package named CoDaPack was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with minimal computing knowledge, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D; these outputs can be zoomed and, in 3D, rotated. A customization menu is included, and outputs can be saved in jpeg format. This version also includes interactive help, and all dialog windows have been improved to facilitate use. To use CoDaPack one opens Excel© and enters the data in a standard spreadsheet, organized as a matrix where Excel© rows correspond to the observations and columns to the parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results, new variables and descriptive statistics, and both appear on the same sheet; graphical output appears in independent windows. In the present version there are 8 menus with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input the variables and any further parameters needed, as well as where to put the results. The web site http://ima.udg.es/CoDaPack contains this freeware package; only Microsoft Excel© under Microsoft Windows© is required to run the software. Key words: compositional data analysis, software
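
CoDaPack's own macros are not reproduced here; as background, a minimal sketch of the kind of logratio transform such packages implement (the well-known centred logratio), written in Python purely for illustration:

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform: log of each part over its geometric mean.

    x: array of compositions, one composition per row (rows = observations,
    columns = parts, mirroring the spreadsheet layout described above).
    """
    x = np.asarray(x, dtype=float)
    g = np.exp(np.log(x).mean(axis=-1, keepdims=True))  # geometric mean per row
    return np.log(x / g)
```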

Relevance:

20.00%

Publisher:

Abstract:

Simpson's paradox, also known as the amalgamation or aggregation paradox, appears when dealing with proportions. Proportions are by construction parts of a whole, which can be interpreted as compositions if we assume they carry only relative information. The Aitchison inner product space structure of the simplex, the sample space of compositions, explains the appearance of the paradox, given that amalgamation is a nonlinear operation within that structure. Here we propose to use balances, which are specific elements of this structure, to analyse situations where the paradox might appear. With the proposed approach we find that the centre of the tables analysed provides a natural way to compare them, one which by construction avoids the possibility of a paradox. Key words: Aitchison geometry, geometric mean, orthogonal projection
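
For illustration, a sketch of a balance, the element of the Aitchison structure the abstract refers to; the function name and index arguments are hypothetical:

```python
import numpy as np

def balance(x, num_idx, den_idx):
    """Isometric log-ratio balance between two groups of parts.

    b = sqrt(r*s / (r+s)) * ln( g(numerator parts) / g(denominator parts) ),
    where g() is the geometric mean and r, s are the group sizes.
    """
    x = np.asarray(x, dtype=float)
    r, s = len(num_idx), len(den_idx)
    g_num = np.exp(np.log(x[num_idx]).mean())
    g_den = np.exp(np.log(x[den_idx]).mean())
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)
```

Unlike amalgamation (summing parts), the balance is a linear operation in the Aitchison geometry, which is why comparisons based on it cannot produce the paradox.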

Relevance:

20.00%

Publisher:

Abstract:

A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the smaller, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete parameter m, the training sample size, which plays an important role in the resulting answer regardless of the sample size. In this paper we study the robustness of tests of independence in contingency tables with respect to intrinsic priors with different degrees of concentration around the null, and compare them with other "robust" results by Good and Crook. Consistency of the intrinsic Bayesian tests is established. We also discuss conditioning issues and sampling schemes, and argue that conditioning should be on either one margin or the table total, but not on both margins. Examples using real and simulated data are given

Relevance:

20.00%

Publisher:

Abstract:

In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. They gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex S^D jointly with the normal distribution on R^(D-1). However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators
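
A rough sketch of the ilr-plus-normal-kernel approach being compared, using pivot ilr coordinates and scipy's covariance-scaled (hence full) bandwidth matrix; this is an illustration under those assumptions, not the code of the cited simulation study:

```python
import numpy as np
from scipy.stats import gaussian_kde

def ilr(x):
    """Pivot ilr coordinates: maps (n, D) compositions to R^(D-1)."""
    x = np.asarray(x, dtype=float)
    n, D = x.shape
    z = np.empty((n, D - 1))
    for i in range(D - 1):
        gmean_rest = np.exp(np.log(x[:, i + 1:]).mean(axis=1))
        z[:, i] = np.sqrt((D - 1 - i) / (D - i)) * np.log(x[:, i] / gmean_rest)
    return z

# Hypothetical usage: comps is an (n, D) array of compositions.
# gaussian_kde scales the sample covariance, i.e. a full bandwidth matrix.
# kde = gaussian_kde(ilr(comps).T)
```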

Relevance:

20.00%

Publisher:

Abstract:

The aim of this talk is to convince the reader that there are many interesting statistical problems in present-day life science data analysis which seem ultimately connected with compositional statistics. Key words: SAGE, cDNA microarrays, (1D-)NMR, virus quasispecies

Relevance:

20.00%

Publisher:

Abstract:

The quantitative estimation of Sea Surface Temperatures (SST) from fossil assemblages is a fundamental issue in palaeoclimatic and palaeoceanographic investigations. The Modern Analogue Technique, a widely adopted method based on direct comparison of fossil assemblages with modern coretop samples, was revised with the aim of conforming it to compositional data analysis. The new CODAMAT method was developed by adopting the Aitchison metric as the distance measure. Modern coretop datasets are characterised by a large number of zeros; zero replacement was carried out using a Bayesian approach based on posterior estimation of the parameters of the multinomial distribution. The number of modern analogues from which to reconstruct the SST was determined by a multiple approach, considering the proxies correlation matrix, the Standardized Residual Sum of Squares and the Mean Squared Distance. This new CODAMAT method was applied to the planktonic foraminiferal assemblages of a core recovered in the Tyrrhenian Sea. Key words: modern analogues, Aitchison distance, proxies correlation matrix, Standardized Residual Sum of Squares
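
For reference, the Aitchison distance adopted here as the dissimilarity between a fossil assemblage and a modern coretop sample is the Euclidean distance between clr-transformed compositions; a minimal sketch:

```python
import numpy as np

def aitchison_distance(x, y):
    """Aitchison distance between two compositions (zero-free vectors)."""
    lx, ly = np.log(np.asarray(x, float)), np.log(np.asarray(y, float))
    # clr transform: subtract the mean log (i.e. the log geometric mean).
    dx, dy = lx - lx.mean(), ly - ly.mean()
    return np.linalg.norm(dx - dy)
```

The zero replacement described above is what makes the logarithms well defined before this distance is computed.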

Relevance:

20.00%

Publisher:

Abstract:

In Catalonia, according to the nitrate directive (91/676/EU), nine areas have been declared vulnerable to nitrate pollution from agricultural sources (Decret 283/1998 and Decret 479/2004). Five of these areas have been studied by coupling hydrochemical data with a multi-isotopic approach (Vitòria et al. 2005, Otero et al. 2007, Puig et al. 2007), in an ongoing research project seeking an integrated application of classical hydrochemical data with a comprehensive isotopic characterisation (δ15N and δ18O of dissolved nitrate, δ34S and δ18O of dissolved sulphate, δ13C of dissolved inorganic carbon, and δD and δ18O of water). Within this general frame, the present contribution explores compositional ways of: (i) distinguishing agrochemical and manure N pollution, and (ii) quantifying natural attenuation of nitrate (denitrification) and identifying possible controlling factors. To achieve this two-fold goal, the following techniques have been used. Separate biplots of each suite of data show that each studied region has distinct δ34S and pH signatures, but the regions are homogeneous with regard to the NO3-related variables. Also, the geochemical variables were projected onto the compositional directions associated with the possible denitrification reactions in each region. The resulting balances can be plotted together with some isotopes to assess their likelihood of occurrence

Relevance:

20.00%

Publisher:

Abstract:

In most psychological tests and questionnaires, a test score is obtained by taking the sum of the item scores. In virtually all cases where the test or questionnaire contains multidimensional forced-choice items, this traditional scoring method is also applied. We argue that the summation of scores obtained with multidimensional forced-choice items produces uninterpretable test scores. Therefore, we propose three alternative scoring methods: a weak and a strict rank preserving scoring method, which both allow an ordinal interpretation of test scores; and a ratio preserving scoring method, which allows a proportional interpretation of test scores. Each proposed scoring method yields an index for each respondent indicating the degree to which the response pattern is inconsistent. Analysis of real data showed that with respect to rank preservation, the weak and strict rank preserving method resulted in lower inconsistency indices than the traditional scoring method; with respect to ratio preservation, the ratio preserving scoring method resulted in lower inconsistency indices than the traditional scoring method

Relevance:

20.00%

Publisher:

Abstract:

Functional Data Analysis (FDA) deals with samples in which a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions)
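
As an illustration of one of the compared pipelines (transform the densities, then apply PCA), a sketch assuming the densities are evaluated on a common grid; the discretisation and the helper name are assumptions for the example:

```python
import numpy as np

def density_clr_pca(densities, n_components=2):
    """Sketch: dimensionality reduction of density functions.

    densities: (n, m) array, each row a strictly positive density evaluated
    on a common grid. A discretised clr transform (the compositional view
    of densities) is followed by ordinary PCA via the SVD.
    """
    logf = np.log(np.asarray(densities, dtype=float))
    clr = logf - logf.mean(axis=1, keepdims=True)   # centre each log-density
    centred = clr - clr.mean(axis=0)                # centre across the sample
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return scores, Vt[:n_components]                # scores and components
```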

Relevance:

20.00%

Publisher:

Abstract:

Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made in which the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods
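
One such linking device is a power transformation that tends to the logarithm as its parameter goes to zero, which is how methods applied to raw profiles can be connected continuously to their logratio counterparts; a minimal sketch:

```python
import numpy as np

def power_transform(x, alpha):
    """Box-Cox-style power family: (x**alpha - 1)/alpha -> log(x) as alpha -> 0.

    Varying alpha in small steps from 1 towards 0 gives the "frames" of the
    movie: each frame re-runs the analysis on the transformed data.
    """
    x = np.asarray(x, dtype=float)
    if abs(alpha) < 1e-12:
        return np.log(x)
    return (x**alpha - 1.0) / alpha
```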

Relevance:

20.00%

Publisher:

Abstract:

This is a piece of teaching material intended to help students in the field of education learn the foundations of descriptive and applied statistics. The publication aims to help readers understand the basic concepts so that they can use them confidently in research contexts and make sense of data from educational and social studies. The document is organised into five sections; each section contains an explanation of the concept or concepts covered, one or two worked examples, proposed exercises, and the answers to those exercises. Finally, a sixth chapter provides review exercises

Relevance:

20.00%

Publisher:

Abstract:

The ACME platform (Avaluació Continuada i Millora de l'Ensenyament), developed in our department, has for some years been a virtual platform supporting the teaching of mathematics at the EPS. It is currently used in all the technological and scientific degree programmes of the Universitat de Girona. The spirit of the platform is to serve as the student's activity workbook

Relevance:

20.00%

Publisher:

Abstract:

In this communication we present a teaching-innovation experience in individualised tutoring. The project was carried out in the course Introducción a la matemática Económica y Empresarial of the Diplomatura en Ciencias Empresariales at the Universidad de Barcelona during the 2006-2007 academic year. It is a semester-long, six-credit, free-elective course taught during the first semester of each academic year. The project falls within the new teaching approaches proposed by the European Higher Education Area, which entail adopting new pedagogical methods suited to teaching in the new educational environment. Our teaching experience in the courses on mathematics applied to the social sciences in the Diplomatura en Ciencias Empresariales at the Universidad de Barcelona has shown that student participation in face-to-face tutoring is low and that class attendance declines as the course progresses. As a result, students abandon the course before it ends, do not sit the assessment tests, and their academic performance is insufficient. This context motivated an attempt to redesign the tutoring system: a virtual tutoring space was built and piloted with a random sample of 50 students. The results of the pilot indicate a clear improvement in the three problems described above. We are currently analysing the viability of this project for very large groups