Abstract:
Evolution of compositions in time, space, temperature or other covariates is frequent in practice. For instance, the radioactive decay of a sample changes its composition with time. Some of the isotopes involved decay into other isotopes of the sample, thus producing a transfer of mass from some components to others while preserving the total mass present in the system. This evolution is traditionally modelled as a system of ordinary differential equations for the mass of each component. However, this kind of evolution can be decomposed into a compositional change, expressed in terms of simplicial derivatives, and a mass evolution (constant in this example). A first result is that the simplicial system of differential equations is non-linear, despite some subcompositions behaving linearly. The goal is to study the characteristics of such simplicial systems of differential equations, such as linearity and stability. This is performed by extracting the compositional differential equations from the mass equations. Then, simplicial derivatives are expressed in coordinates of the simplex, thus reducing the problem to the standard theory of systems of differential equations, including stability. The characterisation of the stability of these non-linear systems relies on the linearisation of the system of differential equations at the stationary point, if any. The eigenvalues of the linearised matrix and the associated behaviour of the orbits are the main tools. For a three-component system, these orbits can be plotted either in coordinates of the simplex or in a ternary diagram. A characterisation of processes with transfer of mass in closed systems in terms of stability is thus concluded. Two examples are presented for illustration, one of them a radioactive decay process.
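As an illustration of the kind of system described above (not the authors' implementation), the sketch below integrates the mass equations of a hypothetical two-step decay chain A → B → C in a closed system and then expresses the compositional evolution in ilr coordinates; the decay constants and initial masses are assumptions made only for this example.

```python
# Illustrative sketch: closed-system decay chain A -> B -> C.
# The masses follow linear ODEs; after closure the composition evolves
# non-linearly in the simplex, which is what the simplicial system captures.
import numpy as np
from scipy.integrate import solve_ivp

lam_A, lam_B = 0.5, 0.1                  # assumed decay constants (1/time)

def mass_ode(t, m):
    A, B, C = m
    return [-lam_A * A, lam_A * A - lam_B * B, lam_B * B]   # total mass conserved

sol = solve_ivp(mass_ode, (0.0, 20.0), [0.98, 0.01, 0.01],  # small B, C avoid log(0)
                t_eval=np.linspace(0.0, 20.0, 201))

masses = sol.y.T                                            # shape (n_times, 3)
comp = masses / masses.sum(axis=1, keepdims=True)           # closure to proportions

# ilr coordinates for a 3-part composition (one choice of orthonormal basis)
z1 = np.sqrt(1 / 2) * np.log(comp[:, 0] / comp[:, 1])
z2 = np.sqrt(2 / 3) * np.log(np.sqrt(comp[:, 0] * comp[:, 1]) / comp[:, 2])

print(comp[-1])    # composition drifts towards the stable end-member C
```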
Abstract:
The Dirichlet family owes its privileged status within simplex distributions to its ease of interpretation and good mathematical properties. In particular, we recall fundamental properties for the analysis of compositional data such as closure under amalgamation and subcomposition. From a probabilistic point of view, it is characterised (uniquely) by a variety of independence relationships which make it indisputably the reference model for expressing the non-trivial idea of substantial independence for compositions. Indeed, its well-known inadequacy as a general model for compositional data stems from such an independence structure together with the poorness of its parametrisation. In this paper a new class of distributions (called the Flexible Dirichlet), capable of handling various dependence structures and containing the Dirichlet as a special case, is presented. The new model exhibits a considerably richer parametrisation which, for example, allows the means and (part of) the variance-covariance matrix to be modelled separately. Moreover, the model preserves some good mathematical properties of the Dirichlet, i.e. closure under amalgamation and subcomposition, with new parameters simply related to the parent composition parameters. Furthermore, the joint and conditional distributions of subcompositions and relative totals can be expressed as simple mixtures of two Flexible Dirichlet distributions. The basis generating the Flexible Dirichlet, though keeping compositional invariance, shows a dependence structure which allows various forms of partitional dependence to be contemplated by the model (e.g. non-neutrality, subcompositional dependence and subcompositional non-invariance), independence cases being identified by suitable parameter configurations. In particular, within this model substantial independence among subsets of components of the composition naturally occurs when the subsets have a Dirichlet distribution.
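As a side illustration of the closure property recalled above (and only of that property; the construction of the Flexible Dirichlet itself is not reproduced here), the following Monte Carlo sketch checks that amalgamating parts of a Dirichlet composition yields a Dirichlet with the corresponding summed parameters; the parameter values are arbitrary.

```python
# Monte Carlo check of Dirichlet closure under amalgamation (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 1.5, 4.0])      # arbitrary Dirichlet parameters
x = rng.dirichlet(alpha, size=100_000)

# Amalgamate parts {1, 2} and {3, 4}
amalg = np.column_stack([x[:, :2].sum(axis=1), x[:, 2:].sum(axis=1)])

# Theory: (X1+X2, X3+X4) ~ Dirichlet(2+3, 1.5+4), i.e. first part ~ Beta(5, 5.5)
print(amalg[:, 0].mean(), 5 / (5 + 5.5))    # empirical vs theoretical mean
print(stats.kstest(amalg[:, 0], "beta", args=(5.0, 5.5)).pvalue)
```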
Abstract:
Sediment composition is mainly controlled by the nature of the source rock(s), and by chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantification of the compositional changes induced by a single process is rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (ø grades <-1 to >9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against ø is assumed to be explained by typical crystal size ranges for the relevant mineral phases. These scatter plots and the biplot suggest splitting the full grain-size range into three groups: coarser than ø=4 (comparatively rich in SiO2, Na2O, K2O, Al2O3, and dominated by “felsic” minerals like quartz and feldspar), finer than ø=8 (comparatively rich in TiO2, MnO, MgO, Fe2O3, mostly related to “mafic” sheet silicates like biotite and chlorite), and intermediate grain sizes (4≤ø<8; comparatively rich in P2O5 and CaO, related to apatite and some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in the ø scale, a step function for ø≥4, and another for ø≥8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals with the largest characteristic size at its lower end. Results suggest that this assumption is reasonable for the step functions, but that besides weathering some other factors (different mechanical behavior of minerals) also make an important contribution to the trend. Key words: sediment, geochemistry, grain size, regression, step function
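A minimal sketch of the regression design just described, assuming synthetic placeholder compositions rather than the Alpine glacier data: the composition is ilr-transformed and regressed on a linear trend in ø plus step functions at ø≥4 and ø≥8.

```python
# Sketch of the regression design described above: ilr-transformed compositions
# regressed on a linear trend in phi plus two step functions. Placeholder data.
import numpy as np

def ilr(x):
    """Pivot (sequential binary partition) ilr coordinates."""
    x = np.asarray(x, dtype=float)
    D = x.shape[-1]
    z = np.empty(x.shape[:-1] + (D - 1,))
    for i in range(D - 1):
        gm = np.exp(np.mean(np.log(x[..., i + 1:]), axis=-1))
        z[..., i] = np.sqrt((D - 1 - i) / (D - i)) * np.log(x[..., i] / gm)
    return z

rng = np.random.default_rng(1)
phi = np.repeat(np.arange(-1, 10), 3).astype(float)       # grain-size classes
comp = rng.dirichlet([5, 3, 2], size=phi.size)            # placeholder 3-part data

Y = ilr(comp)                                             # response in coordinates
X = np.column_stack([np.ones_like(phi), phi,              # intercept + trend
                     (phi >= 4).astype(float),            # step at phi >= 4
                     (phi >= 8).astype(float)])           # step at phi >= 8

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)              # one fit per ilr coordinate
print(beta)                                               # rows: intercept, trend, steps
```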
Abstract:
In CoDaWork’05 we presented an application of discriminant function analysis (DFA) to four different compositional datasets and modelled the first canonical variable using a segmented regression model, based solely on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of our proposed model. In addition to dating the unknown tephras by calibration, as discussed previously, another method is proposed for mapping the unknown tephras onto samples of the reference set, or onto missing samples between consecutive reference samples. The application of these methodologies is demonstrated with both simulated and real datasets. This new methodology provides an alternative, more acceptable approach for geologists, as their focus is on relating the unknown tephra to relevant eruptive events rather than estimating its age. Key words: Tephrochronology; Segmented regression
Abstract:
The identification of compositional changes in fumarolic gases of active and quiescent volcanoes is one of the most important targets in monitoring programs. From a general point of view, many systematic (often cyclic) and random processes control the chemistry of gas discharges, making it difficult to produce a convincing mathematical-statistical model. Changes in the chemical composition of volcanic gases sampled at Vulcano Island (Aeolian Arc, Sicily, Italy) from eight different fumaroles located in the northern sector of the summit crater (La Fossa) have been analysed by considering their dependence on time over the period 2000-2007. Each intermediate chemical composition has been considered as potentially derived from the contributions of the two temporal extremes represented by the 2000 and 2007 samples, respectively, using inverse modelling methodologies for compositional data. Data pertaining to fumaroles F5 and F27, located on the rim and in the inner part of La Fossa crater, respectively, have been used to achieve the proposed aim. The statistical approach has allowed us to highlight the presence of random and non-random fluctuations, features useful for understanding how the volcanic system works, opening new perspectives in sampling strategies and in the evaluation of the natural risk related to a quiescent volcano.
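A toy sketch of the end-member idea described above, with invented gas compositions standing in for the 2000 and 2007 samples; the mixing proportion is obtained here by simple constrained least squares on the closed proportions, which is only one plausible formulation of the inverse modelling.

```python
# Sketch of treating an intermediate gas analysis as a mixture of two
# end-member compositions. All values are invented placeholders.
import numpy as np

end_2000 = np.array([0.85, 0.10, 0.05])   # placeholder end-member A
end_2007 = np.array([0.60, 0.25, 0.15])   # placeholder end-member B
sample   = np.array([0.70, 0.19, 0.11])   # placeholder intermediate sample

# sample ~ lam * end_2000 + (1 - lam) * end_2007, solved for lam in [0, 1]
diff = end_2000 - end_2007
lam = float(np.clip(np.dot(sample - end_2007, diff) / np.dot(diff, diff), 0.0, 1.0))
print(f"estimated fraction of the 2000 end-member: {lam:.2f}")
```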
Abstract:
Geochemical data derived from the whole or partial analysis of various geologic materials represent a composition of mineralogies or solute species. Minerals are composed of structured relationships between cations and anions which, through atomic and molecular forces, keep the elements bound in specific configurations. The chemical compositions of minerals have specific relationships that are governed by these molecular controls. In the case of olivine, there is a well-defined relationship between Mn-Fe-Mg and Si. Balances between the principal elements defining olivine composition and other significant constituents in the composition (Al, Ti) have been defined, resulting in a near-linear relationship between the logarithmic relative proportion of Si versus (Mg, Mn, Fe) and of Mg versus (Mn, Fe), which is typically described but poorly illustrated in the simplex. The present contribution corresponds to ongoing research which attempts to relate stoichiometry and geochemical data using compositional geometry. We describe here the approach by which stoichiometric relationships based on mineralogical constraints can be accounted for in the space of simplicial coordinates, using olivines as an example. Further examples for other mineral types (plagioclases and more complex minerals such as clays) are needed. Issues that remain to be dealt with include the reduction of the bulk chemical composition of a rock composed of several minerals to appropriate balances that can be used to describe the composition in a realistic mineralogical framework. The overall objective of our research is to answer the question: in cases where the mineralogy is unknown, are there suitable proxies that can be substituted? Key words: Aitchison geometry, balances, mineral composition, oxides
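A small sketch of how the two balances mentioned above (Si against Mg, Mn, Fe, and Mg against Mn, Fe) can be computed as ilr-type balance coordinates; the olivine molar proportions below are invented for illustration.

```python
# Illustrative computation of balance coordinates for olivine-like analyses.
import numpy as np

def balance(num, den):
    """ilr balance between a group of numerator parts and a group of denominator parts."""
    num, den = np.atleast_2d(num), np.atleast_2d(den)
    r, s = num.shape[1], den.shape[1]
    g_num = np.exp(np.mean(np.log(num), axis=1))     # geometric means of each group
    g_den = np.exp(np.mean(np.log(den), axis=1))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# columns: Si, Mg, Mn, Fe (placeholder molar proportions of three analyses)
oliv = np.array([[0.40, 0.45, 0.01, 0.14],
                 [0.41, 0.30, 0.02, 0.27],
                 [0.39, 0.52, 0.01, 0.08]])

b1 = balance(oliv[:, [0]], oliv[:, 1:])        # Si vs (Mg, Mn, Fe)
b2 = balance(oliv[:, [1]], oliv[:, [2, 3]])    # Mg vs (Mn, Fe)
print(np.column_stack([b1, b2]))
```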
Abstract:
In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used together with the cosine-theta coefficient as a similarity measure. During the last decade, considerable progress in compositional data analysis has been made and many case studies have been published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of earlier results and to answers to open questions. In this paper we follow the lines of the paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components and visualising the factor scores in a spatial context: the compositional factors are plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep sea sediments
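A compact sketch, on synthetic stand-in data, of the workflow outlined above: clr-transform the compositions, extract factors by PCA, and plot the factor scores against core depth.

```python
# Sketch: clr transform, PCA factor extraction, factor scores versus depth.
# The compositions are synthetic placeholders, not the Indian Ocean core data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
depth = np.linspace(0, 30, 120)                          # metres below sea floor
comp = rng.dirichlet([8, 4, 2, 1, 1], size=depth.size)   # placeholder 5-part data

clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)
clr_c = clr - clr.mean(axis=0)                           # centre over samples

U, S, Vt = np.linalg.svd(clr_c, full_matrices=False)     # PCA via SVD
scores = U[:, :2] * S[:2]                                # first two factor scores

plt.plot(scores[:, 0], depth, label="factor 1")
plt.plot(scores[:, 1], depth, label="factor 2")
plt.gca().invert_yaxis()
plt.xlabel("factor score"); plt.ylabel("depth"); plt.legend()
plt.show()
```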
Abstract:
The aim of this talk is to convince the reader that there are a lot of interesting statistical problems in present-day life science data analysis which seem ultimately connected with compositional statistics. Key words: SAGE, cDNA microarrays, (1D-)NMR, virus quasispecies
Abstract:
Pounamu (NZ jade), or nephrite, is a protected mineral in its natural form following the transfer of ownership back to Ngai Tahu under the Ngai Tahu (Pounamu Vesting) Act 1997. Any theft of nephrite is prosecutable under the Crimes Act 1961. Scientific evidence is essential in cases where origin is disputed. A robust method for discrimination of this material through the use of elemental analysis and compositional data analysis is required. Initial studies have characterised the variability within a given nephrite source. This has included investigation of both in situ outcrops and alluvial material. Methods for the discrimination of two geographically close nephrite sources are being developed. Key Words: forensic, jade, nephrite, laser ablation, inductively coupled plasma mass spectrometry, multivariate analysis, elemental analysis, compositional data analysis
Abstract:
In 2000 the European Statistical Office published the guidelines for developing the Harmonized European Time Use Surveys system. Under this unified framework, the first Time Use Survey of national scope was conducted in Spain during 2002–03. The aim of these surveys is to understand human behavior and the lifestyle of people. Time allocation data are compositional in origin, that is, they are subject to non-negativity and constant-sum constraints. Thus, standard multivariate techniques cannot be directly applied to analyse them. The goal of this work is to identify homogeneous Spanish Autonomous Communities with regard to the typical activity pattern of their respective populations. To this end, a fuzzy clustering approach is followed. Rather than the hard partitioning of classical clustering, where objects are allocated to only a single group, fuzzy methods identify overlapping groups of objects by allowing them to belong to more than one group. Specifically, the probabilistic fuzzy c-means algorithm is adapted to deal with the Spanish Time Use Survey microdata. As a result, a map distinguishing Autonomous Communities with similar activity patterns is drawn. Key words: Time use data; Fuzzy clustering; FCM; simplex space; Aitchison distance
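A minimal sketch, on synthetic data, of probabilistic fuzzy c-means run in ilr coordinates, where Euclidean distance coincides with the Aitchison distance mentioned in the key words; this is not the adaptation actually applied to the Spanish Time Use Survey microdata.

```python
# Fuzzy c-means on ilr coordinates (Euclidean distance there = Aitchison distance).
# Synthetic 4-part "activity shares", not real survey microdata.
import numpy as np

def ilr(x):
    D = x.shape[1]
    z = np.empty((x.shape[0], D - 1))
    for i in range(D - 1):
        gm = np.exp(np.mean(np.log(x[:, i + 1:]), axis=1))
        z[:, i] = np.sqrt((D - 1 - i) / (D - i)) * np.log(x[:, i] / gm)
    return z

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])            # initial memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        d_pow = d ** (-2.0 / (m - 1.0))
        U = d_pow / d_pow.sum(axis=1, keepdims=True)          # membership update
    return U, centers

rng = np.random.default_rng(3)
comp = rng.dirichlet([6, 3, 2, 1], size=50)                   # placeholder compositions
U, centers = fuzzy_cmeans(ilr(comp), c=3)
print(U.sum(axis=1)[:5])                                      # memberships sum to one
```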
Abstract:
In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set, including high-resolution bulk petrography and heavy-mineral analyses of the Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from west to east (Garzanti et al., 2004, 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin by a trunk river flowing longitudinally, parallel to the Southalpine belt (Vezzoli and Garzanti, 2008). This scenario changed rapidly during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of core samples show that the longitudinal trunk river was at this time shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, while glacial sediments carried by Alpine valley glaciers invaded the alluvial plain. Key words: Detrital modes; Modern sands; Provenance; Principal Components Analysis; Similarity; Canberra Distance; palaeodrainage
Abstract:
Emergent molecular measurement methods, such as DNA microarrays, qRT-PCR, and many others, offer tremendous promise for the personalized treatment of cancer. These technologies measure the amount of specific proteins, RNA, DNA or other molecular targets from tumor specimens with the goal of “fingerprinting” individual cancers. Tumor specimens are heterogeneous; an individual specimen typically contains unknown amounts of multiple tissue types. Thus, the measured molecular concentrations result from an unknown mixture of tissue types and must be normalized to account for the composition of the mixture. For example, a breast tumor biopsy may contain normal, dysplastic and cancerous epithelial cells, as well as stromal components (fatty and connective tissue) and blood and lymphatic vessels. Our diagnostic interest focuses solely on the dysplastic and cancerous epithelial cells. The remaining tissue components serve to “contaminate” the signal of interest. The proportion of each tissue component changes as a function of patient characteristics (e.g., age) and varies spatially across the tumor region. Because each tissue component produces a different molecular signature, and the amount of each tissue type is specimen dependent, we must estimate the tissue composition of the specimen and adjust the molecular signal for this composition. Using the idea of a chemical mass balance, we consider the total measured concentrations to be a weighted sum of the individual tissue signatures, where the weights are determined by the relative amounts of the different tissue types. We develop a compositional source apportionment model to estimate the relative amounts of tissue components in a tumor specimen. We then use these estimates to infer the tissue-specific concentrations of key molecular targets for sub-typing individual tumors. We anticipate these specific measurements will greatly improve our ability to discriminate between different classes of tumors and allow more precise matching of each patient to the appropriate treatment.
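A toy sketch of the mass-balance formulation described above, with invented tissue signatures: the measured concentrations are modelled as a weighted sum of tissue-specific signatures and the weights are recovered by non-negative least squares, one simple way to realise a source apportionment fit.

```python
# Sketch: measured concentrations as a weighted sum of tissue signatures,
# with weights estimated by non-negative least squares. Numbers are invented.
import numpy as np
from scipy.optimize import nnls

# rows: tissue types (epithelial, stromal, blood); columns: molecular targets
signatures = np.array([[50.0,  5.0, 20.0,  1.0],
                       [10.0, 30.0,  2.0,  8.0],
                       [ 2.0,  1.0,  4.0, 40.0]])

true_w = np.array([0.6, 0.3, 0.1])                  # hidden tissue proportions
measured = true_w @ signatures                      # mixed specimen measurement

w, _ = nnls(signatures.T, measured)                 # solve measured ~ signatures.T @ w
w /= w.sum()                                        # renormalise to proportions
print(w)                                            # approx. [0.6, 0.3, 0.1]
```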
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq and q², respectively, where p is the allele frequency of A and q = 1-p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample space for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically “close” to the parabola, whereas compositions that differ significantly from HWE are “far”. By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. In this way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to appealing graphical representations where large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
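A small worked example of the Hardy-Weinberg relationship stated above, with made-up genotype counts: the allele frequency is estimated from the counts, the expected proportions p², 2pq, q² are formed, and a chi-square statistic with one degree of freedom is computed.

```python
# Illustrative chi-square test for Hardy-Weinberg equilibrium; counts are made up.
import numpy as np
from scipy import stats

n_AA, n_AB, n_BB = 357, 485, 158                    # invented genotype counts
n = n_AA + n_AB + n_BB
p = (2 * n_AA + n_AB) / (2 * n)                     # allele frequency of A
q = 1 - p

expected = n * np.array([p**2, 2 * p * q, q**2])    # HWE expectations
observed = np.array([n_AA, n_AB, n_BB])
chi2 = np.sum((observed - expected) ** 2 / expected)
pval = stats.chi2.sf(chi2, df=1)                    # df = 3 classes - 1 - 1 estimated p
print(f"chi-square = {chi2:.3f}, p-value = {pval:.3f}")
```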
Abstract:
The amalgamation operation is frequently used to reduce the number of parts of compositional data, but it is a non-linear operation in the simplex equipped with the usual Aitchison geometry. The concept of balances between groups, a particular coordinate system defined over binary partitions of the parts, can be an alternative to amalgamation in some cases. In this work we discuss the proper application of both concepts using a real data set corresponding to behavioral measures of pregnant sows.
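A tiny numerical illustration of the contrast discussed above, with arbitrary values: a balance coordinate shifts additively under perturbation in the Aitchison geometry, whereas the amalgamated part does not.

```python
# Balance vs amalgamation under perturbation (arbitrary illustrative values).
import numpy as np

def close(x):
    return x / x.sum()

def perturb(x, p):
    return close(x * p)

x = close(np.array([0.2, 0.3, 0.5]))
p = np.array([2.0, 1.0, 1.0])                       # arbitrary perturbation

# balance of part 1 against parts {2, 3}
bal = lambda v: np.sqrt(2 / 3) * np.log(v[0] / np.sqrt(v[1] * v[2]))
print(bal(perturb(x, p)) - bal(x), np.sqrt(2 / 3) * np.log(2.0))  # equal: additive shift

# amalgamation of parts {2, 3}: the shift depends on x, i.e. it is not additive
amalg = lambda v: v[1] + v[2]
print(amalg(perturb(x, p)), amalg(x))
```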
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined, the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and instead forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of compositional data analysis pioneered by Aitchison (1986), death densities are transformed into real space so that the full range of multivariate statistics can be applied, and then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
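A schematic sketch, on synthetic data, of the general strategy described above: clr-transform the unit-sum death densities, fit a Lee-Carter-style rank-one decomposition, forecast the time index with a random walk with drift, and back-transform so the unit-sum constraint is honoured; this is not the authors' exact model.

```python
# Sketch: compositional (clr) Lee-Carter-style forecast of death densities.
# The densities below are synthetic placeholders, not real life-table data.
import numpy as np

rng = np.random.default_rng(4)
n_years, n_ages = 40, 20
raw = rng.dirichlet(np.linspace(1, 10, n_ages), size=n_years)   # placeholder densities

clr = np.log(raw) - np.log(raw).mean(axis=1, keepdims=True)     # to real space
alpha = clr.mean(axis=0)                                        # average age pattern
U, S, Vt = np.linalg.svd(clr - alpha, full_matrices=False)
beta, kappa = Vt[0], U[:, 0] * S[0]                             # rank-one Lee-Carter terms

drift = np.mean(np.diff(kappa))                                 # random walk with drift
kappa_fc = kappa[-1] + drift * np.arange(1, 11)                 # 10-year forecast

clr_fc = alpha + np.outer(kappa_fc, beta)
dens_fc = np.exp(clr_fc)
dens_fc /= dens_fc.sum(axis=1, keepdims=True)                   # back to unit sum
print(dens_fc.shape, dens_fc.sum(axis=1)[:3])
```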