Abstract:
Starting with log-ratio biplots for compositional data, which are based on the principle of subcompositional coherence, and then adding weights, as in correspondence analysis, we rediscover Lewi's spectral map and many connections to analyses of two-way tables of non-negative data. Thanks to the weighting, the method also achieves the property of distributional equivalence.
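As a rough illustration of the weighted log-ratio analysis sketched in this abstract (not Lewi's or the author's own implementation), the following Python sketch double-centres the log-transformed matrix with correspondence-analysis row and column weights and then takes a weighted SVD; the data are invented:

```python
import numpy as np

# illustrative non-negative two-way table
X = np.array([[10., 20., 70.],
              [30., 30., 40.],
              [50., 25., 25.],
              [20., 60., 20.]])

L = np.log(X)
r = X.sum(axis=1) / X.sum()      # row weights, as in correspondence analysis
c = X.sum(axis=0) / X.sum()      # column weights

# weighted double-centring of log(X)
row_mean = L @ c                  # weighted mean of each row
col_mean = r @ L                  # weighted mean of each column
grand = r @ L @ c
S = L - row_mean[:, None] - col_mean[None, :] + grand

# weighted SVD of diag(sqrt(r)) S diag(sqrt(c))
U, sv, Vt = np.linalg.svd(np.sqrt(r)[:, None] * S * np.sqrt(c)[None, :])

# principal row coordinates for a 2-D biplot
F = (U[:, :2] / np.sqrt(r)[:, None]) * sv[:2]
```

The weighted double-centring makes the rows and columns of S average to zero under the chosen weights, which is what yields subcompositional coherence in the log-ratio setting.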
Abstract:
The algebraic-geometric structure of the simplex, known as Aitchison geometry, is used to look at the Dirichlet family of distributions from a new perspective. A classical Dirichlet density function is expressed with respect to the Lebesgue measure on real space. We propose here to replace this measure with the Aitchison measure on the simplex, and we study some properties and characteristic measures of the resulting density.
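The change of measure described in this abstract can be sketched as follows; the Jacobian factor involving the product of the parts is standard, while the dimension-dependent constant c_D is deliberately left unspecified here:

```latex
% Dirichlet density with respect to the Lebesgue measure on the simplex:
f(\mathbf{x}\mid\boldsymbol\alpha) \;\propto\; \prod_{i=1}^{D} x_i^{\alpha_i - 1}

% Density with respect to the Aitchison measure (sketch; c_D is a
% dimension-dependent normalising constant):
f_{A}(\mathbf{x}\mid\boldsymbol\alpha)
  \;=\; c_D \, f(\mathbf{x}\mid\boldsymbol\alpha)\,\prod_{i=1}^{D} x_i
  \;\propto\; \prod_{i=1}^{D} x_i^{\alpha_i}
```

The shift of the exponents from alpha_i - 1 to alpha_i is the visible effect of moving from the Lebesgue to the Aitchison measure.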
Abstract:
Examples of compositional data. The simplex, a suitable sample space for compositional data, and Aitchison's geometry. R, a free language and environment for statistical computing and graphics.
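A minimal Python sketch of the closure operation that maps positive vectors onto the simplex, the sample space mentioned above (the function name and data are illustrative):

```python
import numpy as np

def closure(x, kappa=1.0):
    """Rescale a positive vector so that its parts sum to kappa."""
    x = np.asarray(x, dtype=float)
    return kappa * x / x.sum()

comp = closure([10.0, 30.0, 60.0])   # -> array([0.1, 0.3, 0.6])
```

With kappa = 100 the same operation produces percentage compositions; closure is what makes only the relative information in the parts meaningful.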
Abstract:
All of the imputation techniques usually applied for replacing values below the detection limit in compositional data sets have adverse effects on the variability. In this work we propose a modification of the EM algorithm that is applied using the additive log-ratio transformation. This new strategy is applied to a compositional data set and the results are compared with the usual imputation techniques.
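The additive log-ratio (alr) transformation referred to above, with its inverse, in a minimal Python sketch; the modified EM step itself is not reproduced here:

```python
import numpy as np

def alr(x):
    """Additive log-ratio: log of the first D-1 parts over the last part."""
    x = np.asarray(x, dtype=float)
    return np.log(x[:-1] / x[-1])

def alr_inv(y):
    """Map alr coordinates back to a composition summing to 1."""
    x = np.append(np.exp(y), 1.0)
    return x / x.sum()

comp = np.array([0.2, 0.3, 0.5])
assert np.allclose(alr_inv(alr(comp)), comp)   # round trip recovers comp
```

Working in alr coordinates moves the problem into unconstrained real space, where an EM-type algorithm for missing (below-detection-limit) values can be formulated.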
Abstract:
In the eighties, John Aitchison (1986) developed a new methodological approach for the statistical analysis of compositional data. This new methodology was implemented in Basic routines grouped under the name CODA, and later NEWCODA in Matlab (Aitchison, 1997). Since then, several other authors have published extensions to this methodology: Martín-Fernández and others (2000), Barceló-Vidal and others (2001), Pawlowsky-Glahn and Egozcue (2001, 2002) and Egozcue and others (2003). (...)
Abstract:
A study of the workers' association Fomento de las Artes in the period 1847-1912, describing its activities and achievements in the field of education, whose goal was the moral and material improvement of its members. 1. An introduction devoted to examining the economic, social, ideological and educational situation of the country, the class structure of the population and the living conditions of the working class. 2. A study of the background and founding of the Fomento de las Artes and of other educational societies in Madrid. 3. The historical stages, achievements and social and other activities carried out by its successive governing boards. Sources: the Society's internal press organ, held in the Biblioteca Nacional and the Hemeroteca Municipal de Madrid; documentation from the Archivo Histórico de la Villa de Madrid, from the Archivo General de la Administración in Alcalá de Henares and from the library of the Ateneo de Madrid. Between 1847 and 1912 some 40,000 workers and their relatives passed through the Society's classrooms, owing in part to the low cost of enrolment. It contributed to the constant and continuous renewal of the subjects needed for the cultural education of the workers who attended. Similar societies arose in Granada and Alicante. It organised the two most important pedagogical congresses of the previous century (1882 and 1892) and a congress of popular-education societies. It had a reading room and a library, and it organised lectures on current affairs to make its members aware of the need to know the country in which they lived. In conclusion, the overall picture that emerges from this work is the contribution of the Fomento de las Artes to the enlightenment and the spread of popular education among the least favoured sectors of Madrid society. Thanks to this society and to others like it, the alarming illiteracy rates in the country were reduced.
This work also shows that similar initiatives could still be relevant today in some parts of the country.
Abstract:
Compositional data naturally arise from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
Abstract:
"compositions" is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and on the object-oriented programming paradigm of R. In this way, called functions automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedra, various compositional plots (boxplots, barplots, pie charts) and extensive graphical tools for principal components. Proportion lines, straight lines and ellipses in all geometries can afterwards be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.
Abstract:
Part I aims to record the results of the university entrance examination for people over 25, following up on those enrolled in the various university centres and tracing their academic results. Part II: to ascertain the most relevant characteristics of the subjects. Part I covers all subjects enrolled in the over-25 entrance examination during the period studied: 3,728 subjects. Part II considers a population of 395 enrolled at the university, of whom 131 completed the questionnaire (the sample). Part I measured variables related to the entrance examination (enrolled, sat, dropouts, passes) and those concerning the results obtained at the university (dropouts, transfers, graduates). Part II has three broad groups of variables: identification; selection of and motivation towards university studies, assessment of the examination, decision-making when choosing a degree, assessment of the university and the studies undertaken, and self-concept; and, finally, the factors that influenced the academic results obtained. Part II used a questionnaire designed for this research, which explores questions concerning the variables listed above. Part I used the lists of those enrolled, sat and passed in the examination from the Vice-Rectorate for Students, together with the records of students who entered through this examination, held in the archives of the university secretariats. Contingency coefficients were used to examine the association of some of the study variables: reasons for studying and degree chosen, employment status and reasons for studying, and level of preparation for the examination and views on its degree of difficulty.
Only an average of 18 per cent of the subjects who sat managed to pass the over-25 university entrance examination. Among these, the degree most often chosen at enrolment is Law, selected by 67 per cent. The social background of these people, as given by the cultural and occupational level of their parents, is of low or middle strata. The greatest difficulty they encounter when studying at the university is lack of time for personal study, reported by 76 per cent. The dropout rate in this group is high, at 60 per cent. The percentage who manage to complete their studies is 14 per cent. The study establishes: the need to analyse the entrance examination, the composition of the examining boards and the criteria used in setting the examinations, in order to adapt them to the demands of subsequent university teaching; the need to carry out similar research in other districts so as to produce a more complete and explanatory study; and the possibility of a sociological study of this group.
Abstract:
We shall call an n × p data matrix fully-compositional if its rows sum to a constant, and sub-compositional if its variables are a subset of a fully-compositional data set. Such data occur widely in archaeometry, where it is common to determine the chemical composition of ceramic, glass, metal or other artefacts using techniques such as neutron activation analysis (NAA), inductively coupled plasma spectroscopy (ICPS), X-ray fluorescence analysis (XRF), etc. Interest often centres on whether there are distinct chemical groups within the data and whether, for example, these can be associated with different origins or manufacturing technologies.
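A small Python check following the definition above: testing that a matrix is fully-compositional (rows sum to a constant) and extracting a sub-composition by reclosing a subset of the columns. The data are invented percentage compositions:

```python
import numpy as np

def is_fully_compositional(X, tol=1e-8):
    """True if every row of X sums to the same constant (within tol)."""
    sums = X.sum(axis=1)
    return bool(np.all(np.abs(sums - sums[0]) < tol))

X = np.array([[55.0, 25.0, 20.0],      # illustrative oxide percentages
              [60.0, 22.0, 18.0],
              [58.0, 30.0, 12.0]])

# a sub-composition of the first two parts, re-closed to 100
sub = X[:, :2]
sub = 100.0 * sub / sub.sum(axis=1, keepdims=True)

print(is_fully_compositional(X))   # True: each row sums to 100
```

Reclosing the subset is what makes the sub-composition itself fully-compositional again.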
Abstract:
Presentation at CoDaWork'03, session 4: Applications to archaeometry.
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and of the possible processes associated with compositional data sets, in many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools that allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests, but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained, together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated on a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
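A minimal Python sketch of a compositional singular value decomposition of the staying-in-the-simplex kind mentioned above: take centred log-ratio (clr) coordinates, centre, apply an ordinary SVD, and map a low-rank reconstruction back to the simplex. This is not the authors' code, and the data are invented:

```python
import numpy as np

def clr(X):
    """Centred log-ratio transform, row by row."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

def clr_inv(Z):
    """Map clr coordinates back to compositions summing to 1."""
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

X = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.4, 0.5],
              [0.3, 0.3, 0.4]])

Z = clr(X)
Zc = Z - Z.mean(axis=0)                  # centre at the compositional mean
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)

# rank-1 approximation, mapped back into the simplex
approx = clr_inv(Z.mean(axis=0) + s[0] * np.outer(U[:, 0], Vt[0]))
```

Because the reconstruction is pushed back through the inverse clr, the approximation stays inside the simplex, which is the point of the staying-in-the-simplex approach.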
Abstract:
The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows us to point out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or with other reference parameters, allow us to define monitors for assessing the extent of possible environmental contamination. A case study on running and ground waters from the Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO4^2- and Cl- concentrations, will be illustrated.
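A log-contrast, as used above, is a linear combination of the logarithms of the parts whose coefficients sum to zero; that zero-sum constraint is what makes it invariant under closure. A minimal Python sketch with an illustrative contrast vector:

```python
import numpy as np

def log_contrast(x, a):
    """a . log(x), where the coefficients a must sum to zero."""
    a = np.asarray(a, dtype=float)
    assert np.isclose(a.sum(), 0.0), "coefficients must sum to zero"
    return float(a @ np.log(np.asarray(x, dtype=float)))

x = np.array([0.2, 0.3, 0.5])
a = np.array([1.0, -0.5, -0.5])          # illustrative zero-sum contrast

# closure invariance: rescaling x leaves the log-contrast unchanged,
# since the rescaling contributes log(k) * sum(a) = 0
assert np.isclose(log_contrast(x, a), log_contrast(10.0 * x, a))
```

In the abstract's setting, the contrast coefficients would come from a simplicial principal component analysis rather than being chosen by hand.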
Abstract:
The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by the use of a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis allow compositional changes to be modelled relative to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) have opened new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of the different physical-chemical processes at work, thus laying the basis for quantitative modelling.
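The two simplex operations named above can be sketched in Python: perturbation (the simplex analogue of addition) and the power transformation (the analogue of scalar multiplication). The compositions are invented and closed to sum to 1:

```python
import numpy as np

def closure(x):
    return x / x.sum()

def perturb(x, y):
    """Perturbation x (+) y: componentwise product, then closure."""
    return closure(x * y)

def power(x, alpha):
    """Power transformation alpha (.) x: componentwise power, then closure."""
    return closure(x ** alpha)

x = closure(np.array([1.0, 2.0, 7.0]))
y = closure(np.array([2.0, 2.0, 1.0]))

# a linear process in the simplex: x perturbed by increasing powers of y
path = [perturb(x, power(y, t)) for t in (0.0, 0.5, 1.0)]
```

The path traced by increasing t is a straight line in the Aitchison geometry, which is the sense in which such geochemical processes are "linear" in the simplex.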
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions, either for the observed or for the transformed data. This is the case for normal, lognormal and multi-Gaussian kriging. When kriging is applied to transformed scores, the optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations of the transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible optimality criteria induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for its estimates. These problems are ultimately linked to the assumed space structure of the data support: positive values, for instance, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real-space structure and Lebesgue measure.
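A small Python simulation of the back-transformation problem described above: if Z is normal with mean mu and variance sigma^2, the naive back-transform exp(mu) underestimates E[exp(Z)] = exp(mu + sigma^2/2). The parameters are illustrative, not taken from any kriging application:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.8

z = rng.normal(mu, sigma, size=200_000)
empirical_mean = np.exp(z).mean()

naive = np.exp(mu)                     # optimal in log space, biased back
corrected = np.exp(mu + sigma**2 / 2)  # the lognormal mean

print(naive, corrected, empirical_mean)
```

This is the simplest instance of the incompatibility the abstract describes: the estimator that is optimal for the transformed scores is no longer optimal, or even unbiased, after back-transformation.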