969 results for on-disk data layout


Abstract:

It is well accepted that structural studies with model membranes are of considerable value in understanding the structure of biological membranes. Many studies with models of pure phospholipids have been done, but the effects of divalent cations and protein on these models would make such studies more applicable to intact membranes. The present study, performed with this in view, is a structural analysis of divalent ion-cardiolipin complexes using the technique of X-ray diffraction. Cardiolipin, precipitated from dilute solution by the divalent ions calcium, magnesium and barium, contains little water, and the structure formed is similar to that of pure cardiolipin with low water content. The calcium-cardiolipin complex forms a pure hexagonal type II phase that exists from 4° to 40°C. The molar ratio of calcium to cardiolipin in the complex is 1:1. Cardiolipin precipitated with magnesium or barium forms two co-existing phases, lamellar and hexagonal, the relative quantity of the two phases being dependent on temperature. The type II hexagonal phase, consisting of water-filled channels, formed by adding calcium to cardiolipin may confer remarkable permeability properties on the intact membrane. Pure cardiolipin and insulin at pH 3.0 and 4.0 precipitate but form no organised structure. Lecithin/cardiolipin and insulin precipitated at pH 3.0 give a pure lamellar phase. As the lecithin/cardiolipin molar ratio changes from 93/7 to 50/50, (a) the lamellar repeat distance changes from 72.8 Å to 68.2 Å; (b) the amount of protein bound increases in such a way that the cardiolipin/insulin molar ratio in the complex reaches a maximum constant value at a lecithin/cardiolipin molar ratio of 70/30. A structural model based on these data shows that the molecular arrangement of lipid and protein is a lipid bilayer coated with protein molecules. The lipid-protein interaction is chiefly electrostatic, and little, if any, hydrophobic bonding occurs in this particular system. The proposed model is therefore essentially the same as the Davson-Danielli model of the biological membrane.

Abstract:

This project examines students from a private school in southwestern Ontario on a 17-day Costa Rica Outward Bound Rainforest multielement course. The study attempted to discover whether voluntary teenage participants could increase their self-perceptions of life effectiveness by participating in a 17-day expedition. A total of 9 students participated in the study. The experimental design implemented was a mixed methods design. Participants filled in the Life Effectiveness Questionnaire (LEQ) at four predesignated times during the study: (a) before the trip commenced, (b) the first day of the trip, (c) the last day of the trip, and (d) 1 month after the trip ended. Fieldnotes and recordings from informal group debriefing sessions were also used to gather information. Data collected in this study were analyzed in a variety of ways by the researcher. Analyses run on the data included the Friedman test, means, medians, and the Wilcoxon pairs test. The questionnaires were analyzed quantitatively, and the fieldnotes were analyzed qualitatively. Nonparametric statistical analysis was used because of the small number of participants. Both sets of data were grouped and discussed according to similarities and differences. The data indicate that voluntary teenage participants experience significant changes over time in the areas of time management, social competency, emotional control, active initiative, and self-confidence. The outcomes of this study suggest that Outward Bound-type opportunities should be offered to teenagers in Ontario schools as a means of bringing about self-development.
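
The nonparametric tests named above are available in standard statistical tooling. The sketch below is a minimal illustration using SciPy with invented LEQ-style scores for 9 participants at the four measurement times; it is not the study's actual data or analysis.

```python
# Minimal sketch of the nonparametric analyses mentioned above (Friedman test
# across repeated measures, then a pairwise Wilcoxon test). The scores are
# invented placeholders for 9 participants at 4 measurement times.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
pre, day1, last_day, follow_up = (rng.normal(loc, 0.5, size=9)
                                  for loc in (5.0, 5.2, 6.1, 5.9))

# Friedman test: do the repeated measures differ overall?
stat, p = friedmanchisquare(pre, day1, last_day, follow_up)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# Follow-up pairwise comparison (e.g. first day vs. last day of the trip).
w_stat, w_p = wilcoxon(day1, last_day)
print(f"Wilcoxon (day 1 vs last day): W = {w_stat:.1f}, p = {w_p:.3f}")
```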

Abstract:

This thesis concerns the phase separation of two well-known thermosensitive polymers: poly(N-isopropylacrylamide) (PNIPAM) and poly(2-isopropyl-2-oxazoline) (PIPOZ). Among the many studies on these two polymers, two aspects of their thermal properties remain insufficiently understood. One is the cosolvent effect on PNIPAM in mixtures of water and a second water-miscible solvent. The other is the effect of chain end-group properties on the phase separation of PIPOZ. To address these, we first studied the effect of chain architecture on the cosolvent effect of PNIPAM in methanol/water mixtures, using a 4-arm star PNIPAM and a cyclic PNIPAM as models. For the star PNIPAM, the attachment of the PNIPAM arms to a hydrophobic core causes a reduction of Tc (the cloud-point temperature) and a lower enthalpy of the phase transition. In contrast, the Tc of the star PNIPAM depends on the molar mass of the polymer. The cooperativity of dehydration decreases for star PNIPAM and cyclic PNIPAM because of topological constraints. A study of the influence of polymer concentration on the cosolvent effect of PNIPAM in methanol/water mixtures showed that macroscopic liquid-liquid phase separation (MLLPS) occurs for PNIPAM solutions in methanol/water with methanol mole fractions between 0.127 and 0.421 at a constant PNIPAM concentration of 10 g L-1. After two days of equilibration at room temperature, the turbid PNIPAM suspension in the methanol/water mixture separates into two phases, one of which contains much more PNIPAM than the other. A phase diagram showing the MLLPS of the PNIPAM/water/methanol mixture was established from the experimental data. The size and morphology of the droplets in the condensed polymer-rich phase depend on the methanol mole fraction. Because the presence of methanol influences the surface tension of the liquid droplets, the otherwise slow equilibration of phase separation in the PNIPAM/water/methanol system is accelerated and macroscopic liquid-liquid phase separation appears. To study the effect of end groups on the solution properties of PIPOZ, two telechelic PIPOZs bearing perfluorodecanyl (FPIPOZ) or octadecyl (C18PIPOZ) chain ends were synthesized. The Tc values of the telechelic polymers are much lower than that of PIPOZ. Stable micelles form in aqueous solutions of the telechelic polymers. The micellization and phase separation of these polymers in water were studied. The phase separation of the telechelic PIPOZs follows the MLLPS mechanism. Differences in the sizes of the droplets formed within the solutions of the two polymers were observed. To investigate in depth the differences in the association behaviour of the two telechelic polymers, the signal intensities of the corresponding polymers and the relaxation times T1 and T2 were measured. The T2 values of the protons corresponding to the PIPOZs are higher.

Abstract:

This thesis deals initially with a survey of the literature, the taxonomy of the organisms studied, their incidence and distribution in selected food fishes and shellfishes, their survival during different types of processing, their heat survival at temperatures of 50, 55 and 60 degrees centigrade, their growth initiation at different pH levels (4.0 to 10), and their developmental resistance to various chemical agents. The samples for the study were collected from various landing centres at Cochin and from retail outlets. Based on the data collected, the researcher was able to gain a better understanding of processing technology and of the survival of pathogens such as Salmonella and Vibrio parahaemolyticus.

Abstract:

Microarray data analysis is one of data mining tool which is used to extract meaningful information hidden in biological data. One of the major focuses on microarray data analysis is the reconstruction of gene regulatory network that may be used to provide a broader understanding on the functioning of complex cellular systems. Since cancer is a genetic disease arising from the abnormal gene function, the identification of cancerous genes and the regulatory pathways they control will provide a better platform for understanding the tumor formation and development. The major focus of this thesis is to understand the regulation of genes responsible for the development of cancer, particularly colorectal cancer by analyzing the microarray expression data. In this thesis, four computational algorithms namely fuzzy logic algorithm, modified genetic algorithm, dynamic neural fuzzy network and Takagi Sugeno Kang-type recurrent neural fuzzy network are used to extract cancer specific gene regulatory network from plasma RNA dataset of colorectal cancer patients. Plasma RNA is highly attractive for cancer analysis since it requires a collection of small amount of blood and it can be obtained at any time in repetitive fashion allowing the analysis of disease progression and treatment response.
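
To give a flavour of the fuzzy-logic side of such network inference, the toy sketch below scores one hypothetical "regulator activates target" edge by the fuzzy agreement of LOW/MEDIUM/HIGH expression levels across samples. It is a heavily simplified stand-in, not the thesis's fuzzy logic, genetic, or neural fuzzy algorithms; the gene names, memberships and data are invented.

```python
# Toy fuzzy-rule scoring of a candidate activator edge in a gene regulatory
# network: expression is normalized to [0, 1], fuzzified into LOW/MEDIUM/HIGH,
# and the rule "LOW->LOW, MEDIUM->MEDIUM, HIGH->HIGH" is scored by its average
# fuzzy agreement over samples. All names and values are hypothetical.
import numpy as np

def fuzzify(x):
    """Triangular memberships for LOW, MEDIUM, HIGH on [0, 1]."""
    low = np.clip(1.0 - 2.0 * x, 0.0, 1.0)
    med = np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.0, 1.0)
    high = np.clip(2.0 * x - 1.0, 0.0, 1.0)
    return low, med, high

def activation_score(regulator, target):
    """Average agreement of the activator rule across samples (fuzzy AND = min)."""
    agreement = sum(np.minimum(r, t)
                    for r, t in zip(fuzzify(regulator), fuzzify(target)))
    return float(agreement.mean())

# Hypothetical normalized expression of two genes across 8 samples.
gene_a = np.array([0.10, 0.20, 0.40, 0.50, 0.60, 0.80, 0.90, 1.00])  # regulator
gene_b = np.array([0.15, 0.25, 0.35, 0.55, 0.65, 0.75, 0.85, 0.95])  # target

print(f"fuzzy activation score A -> B: {activation_score(gene_a, gene_b):.2f}")
```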

Abstract:

The telemetry data processing operations intended for a given mission are pre-defined by the onboard telemetry configuration and mission trajectory, and the overall telemetry methodology has stabilized in recent years for ISRO vehicles. The telemetry data processing problem is reduced through hierarchical problem reduction, whereby the sequencing of operations evolves as the control task and the operations on data as the function tasks. The input, output and execution criteria of each function task are captured in tables, which are examined by the control task; the control task then schedules a function task when its criteria are met.
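
A minimal sketch of this table-driven control-task/function-task split is given below. The task names, criteria and data keys are hypothetical, not the actual ISRO telemetry configuration tables; the point is only the scheduling pattern: the control task scans the table and runs a function task when its inputs are present and its execution criterion holds.

```python
# Minimal sketch of a table-driven control task: each function task is one
# table row (inputs, output, execution criterion, operation); the control task
# scans the table and schedules a task only when its criterion is met.
# Task names, criteria and data keys are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class FunctionTask:
    name: str
    inputs: List[str]                    # keys expected in the data store
    output: str                          # key written back to the data store
    criterion: Callable[[Dict], bool]    # execution criterion
    operation: Callable[[Dict], float]   # operation on the input data

def control_task(table: List[FunctionTask], store: Dict) -> None:
    """Scan the function-task table and run every task whose criterion holds."""
    for task in table:
        if all(k in store for k in task.inputs) and task.criterion(store):
            store[task.output] = task.operation(store)
            print(f"ran {task.name}: {task.output} = {store[task.output]:.2f}")

# Hypothetical telemetry frame and two function tasks.
store = {"raw_count": 812.0, "scale": 0.05}
table = [
    FunctionTask("calibrate", ["raw_count", "scale"], "calibrated",
                 criterion=lambda s: s["raw_count"] > 0,
                 operation=lambda s: s["raw_count"] * s["scale"]),
    FunctionTask("limit_check", ["calibrated"], "in_limits",
                 criterion=lambda s: "calibrated" in s,
                 operation=lambda s: float(0.0 <= s["calibrated"] <= 50.0)),
]
control_task(table, store)
```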

Abstract:

This paper discusses the salient features of the variation in BOD5 and dissolved oxygen concentration in the Kadinamkulam Kayal, based on fortnightly data from two selected stations from October 1987 to September 1988. The BOD5 ranged from 5.76 to 24.39 mg/l in the surface water and from 4.96 to 22.60 mg/l in the bottom water at station 1, whereas at station 2 it ranged from 0 to 3.74 mg/l in the surface water and from 0 to 3.40 mg/l in the bottom water. The dissolved oxygen concentration ranged from 0 to 0.72 mg/l in the surface water and from 0 to 0.42 mg/l in the bottom water at station 1; at station 2 it ranged from 2.69 to 6.21 mg/l in the surface water and from 1.97 to 5.74 mg/l in the bottom water. The pre-monsoon period showed the highest BOD5 of 16.68 mg/l, while the monsoon period showed the lowest of 0.61 mg/l. The dissolved oxygen concentration reached its peak during the monsoon period (5.52 mg/l). Long spells of anoxic conditions during the post- and pre-monsoon periods were a characteristic feature of the retting zone.

Abstract:

Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns from data. The term data mining refers to the process of performing exploratory analysis on the data and building models from it. To infer patterns from data, data mining involves different approaches such as association rule mining, classification techniques and clustering techniques. Among the many data mining techniques, clustering plays a major role, since it helps to group related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between the data points is a significant factor in finding the clusters. If a dataset consists of mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform format. This research study explores various techniques to convert mixed data sets to a numerical equivalent, so that statistical and similar algorithms can be applied. The results of clustering mixed-category data after conversion to a numeric data type are demonstrated using a crime data set. The thesis also proposes an extension to a well-known algorithm for handling mixed data types, to deal with data sets having only categorical data. The proposed conversion has been validated on a data set corresponding to breast cancer. Another issue with the clustering process is the visualization of the output. Different geometric techniques such as scatter plots or projection plots are available, but none of them displays the result as a projection of the whole database; instead they provide attribute-pairwise analysis.
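
As a small illustration of one common mixed-to-numeric conversion (not necessarily the specific technique proposed in the thesis), the sketch below one-hot encodes a categorical column, standardizes the numeric ones, and clusters the result with k-means; the column names and records are invented.

```python
# Sketch of one common way to convert a mixed numeric/categorical data set to a
# purely numeric one before clustering: one-hot encode the categorical column,
# standardize the numeric columns, then apply k-means. Data are hypothetical;
# the thesis's own conversion scheme may differ.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "age":        [23, 45, 31, 52, 38, 27],
    "income":     [28_000, 72_000, 45_000, 90_000, 60_000, 33_000],
    "crime_type": ["theft", "fraud", "theft", "assault", "fraud", "theft"],
})

numeric = StandardScaler().fit_transform(df[["age", "income"]])
categorical = pd.get_dummies(df["crime_type"]).to_numpy(dtype=float)

X = np.hstack([numeric, categorical])          # uniform numeric representation
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```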

Abstract:

The country has witnessed a tremendous increase in the vehicle population and in axle loading patterns during the last decade, leaving its road network overstressed and leading to premature failure. The type of deterioration present in the pavement should be considered when determining whether it has a functional or structural deficiency, so that the appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure. Inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement, heaves etc. Functional condition determines the level of service provided by the facility to its users at a particular time and also the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration is helpful to assess the remaining effective service life (RSL) of the pavement structure on the basis of reduction in performance levels, and to apply alternative designs and rehabilitation strategies with a long-range funding requirement for pavement preservation. In addition, such models can predict the impact of a treatment on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models and damage models. The factors affecting the deterioration of roads are very complex in nature and vary from place to place. Hence there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy of road improvement. Realizing the need for a detailed study involving all types of roads in the state with varying traffic and soil conditions, the present study has been attempted. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suitable for Kerala conditions. A critical review of the various factors that contribute to pavement performance is presented, based on data collected from selected road stretches and from five corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways and Major District Roads in suburban and rural conditions. This research work is a study of the road condition of Kerala with respect to varying soil, traffic and climatic conditions, a periodic performance evaluation of selected roads of representative types, and the development of distress prediction models for roads of Kerala. In order to achieve this aim, the study is divided into two parts. The first part deals with the study of the pavement condition and subgrade soil properties of urban roads distributed over the five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. In order to collect details of the pavement layers and determine the subgrade soil properties, trial pits were dug and the in-situ field density was found using the sand replacement method. Laboratory investigations were carried out to find the subgrade soil properties: soil classification, Atterberg limits, optimum moisture content, field moisture content and 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting traffic volume count and axle load surveys. From the data thus collected, the strength of the pavement, which is a function of the layer coefficients and thicknesses, was calculated and represented as the Structural Number (SN). This was further related to the CBR value of the soil to obtain the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the surface distress at the time of the investigation and was calculated in the present study using the deduct value method developed by the US Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant soil types and for classified values of the Pavement Condition Index. The relationship will help practicing engineers to design the overlay thickness required for a pavement without conducting the BBD test. Regression analysis using SPSS was carried out with various trials to find the best-fit relationship between rebound deflection and CBR, and other soil properties, for the gravel, sand, silt and clay fractions. The second part of the study deals with the periodic performance evaluation of selected road stretches representing National Highway (NH), State Highway (SH) and Major District Road (MDR) categories, located in different geographical conditions and with varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study, and six sets of continuous periodic data were collected. The periodic data collected include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method and rebound deflection using the Benkelman Beam. Baseline data for the study stretches were collected as one-time data. Pavement history was obtained as secondary data. Pavement drainage characteristics were collected in terms of camber or cross slope, using a camber board (slope meter), for the carriageway and shoulders, availability of a longitudinal side drain, presence of a valley, terrain condition, soil moisture content, water table data, high flood level, rainfall data, land use and the cross slope of the adjoining land. These data were used to determine the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
The progression of deflection, distress, unevenness, skid resistance and macro texture of the study roads was evaluated. Since pavement deterioration is a complex phenomenon to which all the above factors contribute, pavement deterioration models were developed as non-linear regression models, using SPSS, with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression and roughness progression using SPSS. A model for construction quality was also developed. Calibration of the HDM-4 pavement deterioration models for local conditions was done using the data for cracking, raveling, potholes and roughness. Validation was done using the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied considering deterioration parameters such as cracking, potholes and raveling. The alternatives considered for analysis were the base alternative with crack sealing and patching, an overlay of 40 mm BC using ordinary bitumen, an overlay of 40 mm BC using natural rubber modified bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was done considering the Life Cycle Cost (LCC). The average speeds that can be obtained by applying these options were also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, design charts were also plotted for the estimation of maximum wheel load stresses for different slab thicknesses under different soil conditions. The design charts show the maximum stress for a particular slab thickness under different soil conditions, incorporating different k values, and can be handy for a design engineer. Fuzzy rule based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship. Relationships were also developed between Skid Number and the macro texture of the pavement. The effort made through this research work will be helpful to highway engineers in understanding the behaviour of flexible pavements under Kerala conditions and in arriving at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping
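
As a small illustration of the SN and MSN quantities used above, the sketch below computes them for a hypothetical three-layer pavement. The layer coefficients, thicknesses and CBR are invented, and the MSN expression (SN plus a subgrade-CBR term) follows one widely quoted form; it is not taken from this study.

```python
# Illustration of the Structural Number (SN) and Modified Structural Number
# (MSN): SN sums layer coefficient x thickness over the pavement layers, and
# MSN adds a subgrade contribution based on the soaked CBR. All numbers and the
# exact MSN expression used here are assumptions for illustration only.
import math

layers = [                 # (layer coefficient per inch, thickness in inches)
    (0.40, 4.0),           # bituminous surfacing (assumed coefficient)
    (0.14, 10.0),          # granular base (assumed coefficient)
    (0.11, 8.0),           # granular sub-base (assumed coefficient)
]

sn = sum(a * d for a, d in layers)

cbr = 6.0                  # 4-day soaked subgrade CBR, per cent (assumed)
msn = sn + 3.51 * math.log10(cbr) - 0.85 * math.log10(cbr) ** 2 - 1.43

print(f"SN  = {sn:.2f}")
print(f"MSN = {msn:.2f}")
```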

Abstract:

This study investigated the relationship between higher education and the requirements of the world of work, with an emphasis on the effect of problem-based learning (PBL) on graduates' competencies. The implementation of the full PBL method is costly (Albanese & Mitchell, 1993; Berkson, 1993; Finucane, Shannon, & McGrath, 2009). However, the implementation of PBL in a less than curriculum-wide mode is more achievable in a broader context (Albanese, 2000). This means that higher education institutions implement only a few PBL components in the curriculum, or a teacher implements a few PBL components at the course level. For this kind of implementation there is a need to identify PBL components and their effects on particular educational outputs (Hmelo-Silver, 2004; Newman, 2003). So far, however, there has been little research on this topic. The main aims of this study were: (1) to identify each of the PBL components, manifested in the development of a valid and reliable PBL implementation questionnaire, and (2) to determine the effect of each identified PBL component on specific graduates' competencies. The analysis was based on quantitative data collected in a survey of medicine graduates of Gadjah Mada University, Indonesia. A total of 225 graduates responded to the survey. The results of confirmatory factor analysis (CFA) showed that all individual constructs of PBL and graduates' competencies had acceptable goodness-of-fit (GOF). Additionally, the values of the factor loadings (standardized loading estimates), the AVEs (average variance extracted), CRs (construct reliability), and ASVs (average shared squared variance) showed evidence of convergent and discriminant validity. All values indicated valid and reliable measurements. The investigation of the effects of PBL showed that each PBL component had specific effects on graduates' competencies. Interpersonal competencies were affected by the Student-centred learning (β = .137; p < .05) and Small group components (β = .078; p < .05). Problem as stimulus affected Leadership (β = .182; p < .01). Real-world problems affected Personal and organisational competencies (β = .140; p < .01) and Interpersonal competencies (β = .114; p < .05). Teacher as facilitator affected Leadership (β = .142; p < .05). Self-directed learning affected Field-related competencies (β = .080; p < .05). These results can help higher education institutions and educators make informed choices about the implementation of PBL components. With this information, higher education institutions and educators could fulfil their educational goals while working within their limited resources. This study seeks to improve on prior studies' research methods in four major ways: (1) by identifying PBL components based on theory and empirical data; (2) by using latent variables in the structural equation modelling instead of using a variable as a proxy of a construct; (3) by using CFA to validate the latent structure of the measurement, thus providing better evidence of validity; and (4) by using graduate survey data, which is suitable for analysing PBL effects in the framework of the relationship between higher education and the world of work.

Abstract:

We contribute a quantitative and systematic model to capture etch non-uniformity in deep reactive ion etch of microelectromechanical systems (MEMS) devices. Deep reactive ion etch is commonly used in MEMS fabrication where high-aspect ratio features are to be produced in silicon. It is typical for many supposedly identical devices, perhaps of diameter 10 mm, to be etched simultaneously into one silicon wafer of diameter 150 mm. Etch non-uniformity depends on uneven distributions of ion and neutral species at the wafer level, and on local consumption of those species at the device, or die, level. An ion–neutral synergism model is constructed from data obtained from etching several layouts of differing pattern opening densities. Such a model is used to predict wafer-level variation with an r.m.s. error below 3%. This model is combined with a die-level model, which we have reported previously, on a MEMS layout. The two-level model is shown to enable prediction of both within-die and wafer-scale etch rate variation for arbitrary wafer loadings.
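
To make the ion-neutral synergism idea concrete, the toy sketch below uses a harmonic-mean rate law in which etching requires both ion and neutral fluxes, and depletes the neutral flux in proportion to the local pattern opening density as a crude stand-in for loading. The coefficients, fluxes and depletion term are invented; this is not the calibrated two-level model reported in the paper.

```python
# Toy ion-neutral synergism etch-rate law of the form
#   1/R = 1/(k_i * F_ion) + 1/(k_n * F_neutral),
# with the neutral flux depleted in proportion to the pattern opening density
# (a crude stand-in for die- and wafer-level loading). All values are invented.
def etch_rate(f_ion, f_neutral, opening_density,
              k_i=1.0, k_n=1.0, depletion=0.6):
    """Etch rate in arbitrary units for the given fluxes and opening density."""
    f_n_local = f_neutral * (1.0 - depletion * opening_density)
    return 1.0 / (1.0 / (k_i * f_ion) + 1.0 / (k_n * f_n_local))

# Compare a sparse layout with a dense one under the same plasma conditions.
for density in (0.05, 0.50):
    r = etch_rate(f_ion=2.0, f_neutral=5.0, opening_density=density)
    print(f"opening density {density:.2f}: etch rate {r:.2f} (a.u.)")
```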

Abstract:

We compare correspondence analysis to the logratio approach based on compositional data. We also compare correspondence analysis with an alternative approach using the Hellinger distance for representing categorical data in a contingency table. We propose a coefficient which globally measures the similarity between these approaches. This coefficient can be decomposed into several components, one for each principal dimension, indicating the contribution of the dimensions to the difference between the two representations. These three methods of representation can produce quite similar results. One illustrative example is given.
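
The sketch below compares two of these representations on a small invented contingency table: classical correspondence analysis (row principal coordinates from an SVD of the standardized residuals) and a Hellinger-distance representation (square roots of the row profiles). Their overall agreement is summarized with an RV-style coefficient, which is only one plausible global similarity measure, not necessarily the coefficient proposed in the paper.

```python
# Compare correspondence-analysis row coordinates with a Hellinger
# representation of the same contingency table, and summarize their agreement
# with an RV-style coefficient. The table is invented.
import numpy as np

N = np.array([[30.0, 10.0,  5.0],
              [12.0, 25.0,  8.0],
              [ 6.0,  9.0, 20.0]])

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)

# Correspondence analysis: SVD of standardized residuals, row principal coords.
S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
F_ca = np.diag(r ** -0.5) @ U * sv

# Hellinger representation: square roots of the row profiles, centred.
H = np.sqrt(P / r[:, None])
F_hell = H - H.mean(axis=0)

def rv_coefficient(X, Y):
    """RV coefficient between two configurations of the same rows."""
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

print(f"RV(CA, Hellinger) = {rv_coefficient(F_ca, F_hell):.3f}")
```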

Abstract:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zeros is usually understood as "a trace too small to measure", it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if the replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, a substitution method for missing values in compositional data sets is introduced in the same paper.
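
A minimal sketch of multiplicative replacement of rounded zeros is given below for a composition closed to 1: zeros become small imputed values and the non-zero parts are rescaled multiplicatively, so that their ratios (and hence the subcompositional covariance structure) are preserved. The composition and the imputed value delta are invented; in practice delta is often tied to the detection limit.

```python
# Multiplicative replacement of rounded zeros in a composition closed to 1:
# zeros are set to delta and non-zero parts are scaled by (1 - total imputed),
# which leaves the ratios among non-zero parts unchanged. Values are invented.
import numpy as np

def multiplicative_replacement(x, delta):
    """Replace zeros in a composition x (summing to 1) by delta, multiplicatively."""
    x = np.asarray(x, dtype=float)
    zeros = (x == 0)
    total_imputed = delta * zeros.sum()
    return np.where(zeros, delta, x * (1.0 - total_imputed))

x = np.array([0.60, 0.25, 0.15, 0.0, 0.0])    # two parts below detection limit
r = multiplicative_replacement(x, delta=0.005)
print(r, r.sum())                             # still sums to 1
print(r[0] / r[1], x[0] / x[1])               # ratio of non-zero parts unchanged
```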

Abstract:

Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and of the possible processes associated with compositional data sets, in many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests, but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained, together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
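
The staying-in-the-simplex machinery mentioned above rests on a singular value decomposition of centred log-ratio (clr) transformed compositions. The sketch below applies it to a tiny invented table of major-oxide compositions, not the 209-specimen limestone data set, and reports the proportion of compositional variability carried by each component.

```python
# Compositional singular value decomposition: close each composition, take
# centred log-ratios (clr), centre the data, and decompose. The tiny table of
# major-oxide compositions below is invented for illustration.
import numpy as np

X = np.array([[52.1, 14.3, 8.2, 25.4],        # each row: one specimen (parts in %)
              [48.7, 16.1, 9.0, 26.2],
              [55.4, 12.8, 7.1, 24.7],
              [50.2, 15.5, 8.6, 25.7]])
X = X / X.sum(axis=1, keepdims=True)          # close each composition to 1

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)   # centred log-ratios
clr_centred = clr - clr.mean(axis=0)                       # remove the centre

U, sv, Vt = np.linalg.svd(clr_centred, full_matrices=False)
explained = sv**2 / (sv**2).sum()
print("proportion of compositional variability per component:",
      np.round(explained, 3))
```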

Abstract:

The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows us to point out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or with other reference parameters, allow us to define monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from the Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, will be illustrated.
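
To illustrate the monitoring idea, the sketch below derives a first-principal-component log-contrast from clr-transformed reference samples and flags a new sample whose score falls far from the reference distribution. The ion concentrations are randomly generated, not the Chiavenna Valley data, and the 2-sigma threshold is an arbitrary illustration rather than the monitors defined in the paper.

```python
# Using first-principal-component log-contrast scores as a simple monitor:
# compute the leading log-contrast from clr-transformed reference samples, then
# flag new samples whose scores deviate strongly from the reference mean.
# Data and the 2-sigma threshold are invented illustrations.
import numpy as np

def clr(comp):
    comp = comp / comp.sum(axis=1, keepdims=True)
    logc = np.log(comp)
    return logc - logc.mean(axis=1, keepdims=True)

rng = np.random.default_rng(1)
reference = rng.lognormal(mean=1.0, sigma=0.15, size=(40, 7))  # 40 samples, 7 ions

Z = clr(reference)
Zc = Z - Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
contrast = Vt[0]                              # first log-contrast (PC1 loadings)

scores = Zc @ contrast
mu, sigma = scores.mean(), scores.std()

new_sample = rng.lognormal(mean=1.0, sigma=0.15, size=(1, 7))
new_sample[0, 0] *= 3.0                       # perturb one ion to mimic contamination
new_score = (clr(new_sample) - Z.mean(axis=0)) @ contrast

flag = abs(new_score[0] - mu) > 2.0 * sigma
print(f"log-contrast score = {new_score[0]:.2f}, flagged = {flag}")
```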