999 results for compositional processes
Abstract:
The log-ratio methodology makes available powerful tools for analyzing compositional data. Nevertheless, the use of this methodology is only possible for data sets without null values. Consequently, in data sets where zeros are present, a preliminary treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not contemplate the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and we introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and we estimate the posterior probabilities. Then we apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied.
Key words: count data, multiplicative replacement, composition, log-ratio analysis
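As a rough illustration of the Bayesian-multiplicative idea described in this abstract, the following Python sketch imputes count zeros from the posterior mean of a Dirichlet-multinomial model and rescales the non-zero parts multiplicatively. The uniform Dirichlet prior and the `prior_strength` default are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def bayes_multiplicative_replacement(counts, prior_strength=1.0):
    """Impute count zeros from a Dirichlet posterior, then rescale non-zeros multiplicatively.

    A minimal sketch under a uniform Dirichlet prior with total mass `prior_strength`
    (an assumed default); other priors are equally possible.
    """
    counts = np.asarray(counts, dtype=float)
    D, n = counts.size, counts.sum()
    alpha = np.full(D, prior_strength / D)                 # uniform Dirichlet prior
    posterior_mean = (counts + alpha) / (n + prior_strength)

    comp = counts / n                                      # observed (closed) composition
    zeros = counts == 0
    replaced = comp.copy()
    replaced[zeros] = posterior_mean[zeros]                # zeros: posterior estimate
    replaced[~zeros] = comp[~zeros] * (1.0 - replaced[zeros].sum())  # non-zeros: shrink so the total stays 1
    return replaced

# example: a 5-part count composition with two count zeros
print(bayes_multiplicative_replacement([12, 0, 7, 0, 31]))
```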
Abstract:
In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. Indeed, they gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex S^D jointly with the normal distribution on R^(D-1). However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators
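A hedged sketch of how the ilr-plus-full-bandwidth-normal-kernel estimator could be set up in Python follows; the Helmert-type contrast matrix and the use of scipy's gaussian_kde (Scott's rule with a full covariance) are illustrative substitutes for the specific basis and bandwidth selectors discussed in the cited paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def helmert_contrasts(D):
    """One common orthonormal contrast matrix ((D-1) x D) defining an ilr basis."""
    V = np.zeros((D - 1, D))
    for i in range(1, D):
        V[i - 1, :i] = 1.0 / i
        V[i - 1, i] = -1.0
        V[i - 1] *= np.sqrt(i / (i + 1.0))
    return V

def ilr_kde(sample, eval_points):
    """Kernel density estimate of compositional data in ilr coordinates.

    `sample` and `eval_points` are arrays of strictly positive compositions
    (rows summing to 1); the returned density values live in ilr coordinates.
    """
    V = helmert_contrasts(sample.shape[1])
    z = np.log(sample) @ V.T             # ilr coordinates of the observations
    kde = gaussian_kde(z.T)              # normal kernel, full bandwidth matrix (Scott's rule)
    return kde((np.log(eval_points) @ V.T).T)
```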
Abstract:
The quantitative estimation of Sea Surface Temperatures from fossil assemblages is a fundamental issue in palaeoclimatic and palaeoceanographic investigations. The Modern Analogue Technique, a widely adopted method based on direct comparison of fossil assemblages with modern coretop samples, was revised with the aim of conforming it to compositional data analysis. The new CODAMAT method was developed by adopting the Aitchison metric as distance measure. Modern coretop datasets are characterised by a large amount of zeros. The zero replacement was carried out by adopting a Bayesian approach, based on a posterior estimation of the parameter of the multinomial distribution. The number of modern analogues from which to reconstruct the SST was determined by means of a multiple approach, considering the Proxies correlation matrix, the Standardized Residual Sum of Squares and the Mean Squared Distance. This new CODAMAT method was applied to the planktonic foraminiferal assemblages of a core recovered in the Tyrrhenian Sea.
Key words: Modern analogues, Aitchison distance, Proxies correlation matrix, Standardized Residual Sum of Squares
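For reference, the Aitchison distance used as the dissimilarity measure here can be written as the Euclidean distance between clr-transformed compositions; a minimal Python version (assuming zero-free compositions, e.g. after a Bayesian zero replacement such as the one sketched earlier) is shown below.

```python
import numpy as np

def aitchison_distance(x, y):
    """Aitchison distance: Euclidean distance between the clr coordinates of two compositions."""
    clr_x = np.log(x) - np.log(x).mean()
    clr_y = np.log(y) - np.log(y).mean()
    return np.linalg.norm(clr_x - clr_y)

# the modern analogues of a fossil assemblage would then be the coretop samples
# minimising this distance (with the number of analogues chosen as in the abstract)
```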
Abstract:
In Catalonia, according to the nitrate directive (91/676/EU), nine areas have been declared vulnerable to nitrate pollution from agricultural sources (Decret 283/1998 and Decret 479/2004). Five of these areas have been studied by coupling hydrochemical data with a multi-isotopic approach (Vitòria et al. 2005, Otero et al. 2007, Puig et al. 2007), in an ongoing research project looking for an integrated application of classical hydrochemistry data with a comprehensive isotopic characterisation (δ15N and δ18O of dissolved nitrate, δ34S and δ18O of dissolved sulphate, δ13C of dissolved inorganic carbon, and δD and δ18O of water). Within this general frame, the contribution presented explores compositional ways of (i) distinguishing agrochemical and manure N pollution, and (ii) quantifying natural attenuation of nitrate (denitrification) and identifying possible controlling factors. To achieve this two-fold goal, the following techniques have been used. Separate biplots of each suite of data show that each studied region has distinct δ34S and pH signatures, but the regions are homogeneous with regard to the NO3- related variables. Also, the geochemical variables were projected onto the compositional directions associated with the possible denitrification reactions in each region. The resulting balances can be plotted together with some isotopes to assess their likelihood of occurrence
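The balances mentioned at the end of this abstract are, up to a normalising constant, log-ratios of geometric means of two groups of parts. A small Python sketch follows; the choice of numerator and denominator parts (e.g. nitrate versus the products of a candidate denitrification reaction) is an assumption made only for illustration.

```python
import numpy as np

def balance(x, num_idx, den_idx):
    """Isometric log-ratio balance between two groups of parts of a composition x."""
    x = np.asarray(x, dtype=float)
    r, s = len(num_idx), len(den_idx)
    g_num = np.exp(np.log(x[num_idx]).mean())   # geometric mean of numerator parts
    g_den = np.exp(np.log(x[den_idx]).mean())   # geometric mean of denominator parts
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)
```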
Abstract:
Geochemical data derived from the whole or partial analysis of various geologic materials represent a composition of mineralogies or solute species. Minerals are composed of structured relationships between cations and anions which, through atomic and molecular forces, keep the elements bound in specific configurations. The chemical compositions of minerals have specific relationships that are governed by these molecular controls. In the case of olivine, there is a well-defined relationship between Mn-Fe-Mg and Si. Balances between the principal elements defining olivine composition and other significant constituents in the composition (Al, Ti) have been defined, resulting in a near-linear relationship between the logarithmic relative proportion of Si versus (Mg, Mn, Fe) and Mg versus (Mn, Fe), which is typically described but poorly illustrated in the simplex. The present contribution corresponds to ongoing research, which attempts to relate stoichiometry and geochemical data using compositional geometry. We describe here the approach by which stoichiometric relationships based on mineralogical constraints can be accounted for in the space of simplicial coordinates, using olivines as an example. Further examples for other mineral types (plagioclases and more complex minerals such as clays) are needed. Issues that remain to be dealt with include the reduction of a bulk chemical composition of a rock comprised of several minerals, from which appropriate balances can be used to describe the composition in a realistic mineralogical framework. The overall objective of our research is to answer the question: in cases where the mineralogy is unknown, are there suitable proxies that can be substituted?
Key words: Aitchison geometry, balances, mineral composition, oxides
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as “functional data” and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory: clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices
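As a sketch of the two dissimilarities described above (shape-plus-level and shape-only), one possible Python formulation treats each smoothed trajectory as a sequence of compositions and sums Aitchison distances over time, centring each trajectory by its own mean clr profile for the shape-only version; the centring and the discrete sum are assumptions, not the authors' exact definitions.

```python
import numpy as np

def clr(X):
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)      # row-wise clr transform

def trajectory_distance(A, B, shape_only=False):
    """Dissimilarity between two compositional trajectories A, B of shape (T, D).

    With shape_only=True each trajectory is first centred by its own mean
    clr profile, so only differences in shape (not level) contribute.
    """
    CA, CB = clr(A), clr(B)
    if shape_only:
        CA = CA - CA.mean(axis=0)
        CB = CB - CB.mean(axis=0)
    return np.sqrt(((CA - CB) ** 2).sum(axis=1)).sum()   # summed Aitchison distances over time
```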
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as non-zero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the available unit is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data
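A toy simulation of the first of the two models (independent binomial incidence, conditional logistic-normal composition over the non-zero parts) might look as follows; the parameterisation is an assumption chosen only to make the two-stage structure concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_two_stage(n, presence_prob, mu, cov):
    """Simulate n compositions with essential zeros from a two-stage model.

    Stage 1: independent Bernoulli draws decide which parts are present.
    Stage 2: a logistic-normal composition distributes the unit over the present parts.
    """
    D = len(presence_prob)
    out = np.zeros((n, D))
    for k in range(n):
        present = rng.random(D) < presence_prob
        if not present.any():
            present[rng.integers(D)] = True          # at least one part must be non-zero
        y = rng.multivariate_normal(mu, cov)          # latent Gaussian, one value per part
        w = np.where(present, np.exp(y), 0.0)
        out[k] = w / w.sum()                          # logistic construction on the present parts
    return out

# incidence matrix and conditional compositional matrix, as in the abstract
X = simulate_two_stage(5, [0.9, 0.6, 0.4], mu=np.zeros(3), cov=np.eye(3))
incidence = (X > 0).astype(int)
```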
Abstract:
This file contains the ontology of patterns of educational settings, as part of the formal framework for specifying, reusing and implementing educational settings. Furthermore, it includes the set of rules that extend the ontology of educational scenarios, as well as a brief description of the level of patterns of such an ontological framework.
Abstract:
The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry, where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or with other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, will be illustrated
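A minimal sketch of obtaining log-contrast scores from a simplicial principal component analysis is given below, carried out here as an ordinary PCA of clr-transformed, zero-free data, which is one standard route though not necessarily the authors' exact implementation.

```python
import numpy as np

def clr(X):
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)      # centred log-ratio transform, row-wise

def simplicial_pca(X):
    """PCA of clr-transformed compositions; the scores are log-contrast values per sample."""
    Z = clr(np.asarray(X, dtype=float))
    Zc = Z - Z.mean(axis=0)                        # centre in clr space
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    scores = Zc @ Vt.T                             # log-contrast values (the indicators)
    return scores, Vt, s**2 / (len(X) - 1)         # scores, loadings, explained variances
```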
Abstract:
For more than 20 years, organisations like Gesto por la Paz and Lokarri have been trying to change the social approach to violence, instilling values of peace and dialogue. This working paper defends the idea that the work of these two organisations is key to understanding the end of ETA violence and the lack of support that political violence has in the Basque Country. It develops the Basque peace frame generated by this movement and explains how this frame is present at the different levels of Basque society, changing the way political collective identities are negotiated in the Basque Country. Ultimately, their effort is to propose another way of doing politics, one where nationalism and violence are not intrinsically united, escaping from the polarization and confrontation that were in place during the 1980s and 1990s.
Abstract:
Chronic periaortitis (CP) is an uncommon inflammatory disease which primarily involves the infrarenal portion of the abdominal aorta. However, CP should be regarded as a generalized disease with three different pathophysiological entities, namely idiopathic retroperitoneal fibrosis (RPF), inflammatory abdominal aortic aneurysm and perianeurysmal RPF. These entities share similar histopathological characteristics and ultimately lead to fibrosis of the retroperitoneal space. Besides fibrosis, an infiltrate of variable chronic inflammatory cells is present. The majority of these cells are lymphocytes and macrophages, as well as vascular endothelial cells, most of which are HLA-DR-positive. B and T cells are present, with a majority of T cells of the T-helper phenotype. Cytokine gene expression analysis shows the presence of interleukin (IL)-1alpha, IL-2, IL-4, interferon-gamma and IL-2 receptors. Adhesion molecules such as E-selectin, intercellular adhesion molecule-1 and vascular cell adhesion molecule-1 were also found in aortic tissue, and may play a significant role in CP pathophysiology. Although CP pathogenesis remains unknown, an exaggerated inflammatory response to advanced atherosclerosis (ATS) has been postulated to be the main process. Autoimmunity has also been proposed as a contributing factor, based on immunohistochemical studies. The suspected allergen may be a component of ceroid, which is elaborated within the atheroma. We review the pathogenesis and the pathophysiology of CP, and its potential links with ATS. Clinically relevant issues are summarized in each section with regard to the current working hypothesis of this complex inflammatory disease.
Abstract:
We present the derivation of the continuous-time equations governing the limit dynamics of discrete-time reaction-diffusion processes defined on heterogeneous metapopulations. We show that, when a rigorous time limit is performed, the lack of an epidemic threshold in the spread of infections is not limited to metapopulations with a scale-free architecture, as had been predicted from dynamical equations in which reaction and diffusion occur sequentially in time
Abstract:
Evolutionary processes acting at the expanding margins of a species' range are still poorly understood. Genetic drift is considered prevalent in marginal populations, and the maintenance of genetic diversity during recolonization might seem puzzling. To investigate such processes, a fine-scale investigation of 219 individuals was performed within a population of Biscutella laevigata (Brassicaceae), located at the leading edge of its range. The survey used amplified fragment length polymorphisms (AFLPs). As commonly reported across the whole species distribution range, individual density and genetic diversity decreased along the local axis of recolonization of this expanding population, highlighting the enduring effect of the historical colonization on present-day diversity. The self-incompatibility system of the plant may have prevented local inbreeding in newly found patches and sustained genetic diversity by ensuring gene flow from established populations. Within the more continuously populated region, spatial analysis of genetic structure revealed restricted gene flow among individuals. The distribution of genotypes formed a mosaic of relatively homogenous patches within the continuous population. This pattern could be explained by a history of expansion by long-distance dispersal followed by fine-scale diffusion (that is, a stratified dispersal combination). The secondary contact among expanding patches apparently led to admixture among differentiated genotypes where they met (that is, a reshuffling effect). This type of dynamics could explain the maintenance of genetic diversity during recolonization.
Exact asymptotics and limit theorems for supremum of stationary chi-processes over a random interval