997 results for data elements


Relevance: 30.00%

Abstract:

Several eco-toxicological studies have shown that insectivorous mammals, due to their feeding habits, easily accumulate high amounts of pollutants relative to other mammal species. To assess the bio-accumulation levels of toxic metals and their influence on essential metals, we quantified the concentration of 19 elements (Ca, K, Fe, B, P, S, Na, Al, Zn, Ba, Rb, Sr, Cu, Mn, Hg, Cd, Mo, Cr and Pb) in bones of 105 greater white-toothed shrews (Crocidura russula) from a polluted (Ebro Delta) and a control (Medas Islands) area. Since the chemical contents of a bio-indicator are mainly compositional data, conventional statistical analyses currently used in eco-toxicology can give misleading results. Therefore, to improve the interpretation of the data obtained, we used statistical techniques for compositional data analysis to define groups of metals and to evaluate the relationships between them from an inter-population viewpoint. Hypothesis testing on the appropriate balance coordinates allows us to confirm intuition-based hypotheses and some previous results. The main statistical goal was to test equality of means of balance coordinates for the two defined populations. After checking normality, one-way ANOVA or Mann-Whitney tests were carried out for the inter-group balances.
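The balance-coordinate testing workflow described above can be sketched as follows; the element groupings, Dirichlet parameters and sample sizes are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def balance(x, num_idx, den_idx):
    """ilr balance: sqrt(r*s/(r+s)) * ln(gmean(numerator parts) / gmean(denominator parts))."""
    r, s = len(num_idx), len(den_idx)
    gm_num = np.exp(np.log(x[:, num_idx]).mean(axis=1))
    gm_den = np.exp(np.log(x[:, den_idx]).mean(axis=1))
    return np.sqrt(r * s / (r + s)) * np.log(gm_num / gm_den)

# Simulated 4-part compositions for a "polluted" and a "control" population
polluted = rng.dirichlet([8, 4, 3, 2], size=50)  # "toxic" parts 2, 3 relatively enriched
control = rng.dirichlet([8, 4, 1, 1], size=55)

# Balance of "essential" parts {0, 1} against "toxic" parts {2, 3}
b_pol = balance(polluted, [0, 1], [2, 3])
b_con = balance(control, [0, 1], [2, 3])

# After checking normality, use one-way ANOVA or Mann-Whitney, as in the abstract
if all(stats.shapiro(b).pvalue > 0.05 for b in (b_pol, b_con)):
    res = stats.f_oneway(b_pol, b_con)
else:
    res = stats.mannwhitneyu(b_pol, b_con)
print(res.pvalue)
```

A balance is a single real coordinate per sample, so the two populations can be compared with standard univariate tests without the spurious-correlation problems of raw compositional parts.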

Relevance: 30.00%

Abstract:

Simpson's paradox, also known as the amalgamation or aggregation paradox, appears when dealing with proportions. Proportions are by construction parts of a whole, which can be interpreted as compositions if we assume they carry only relative information. The Aitchison inner-product space structure of the simplex, the sample space of compositions, explains the appearance of the paradox, given that amalgamation is a nonlinear operation within that structure. Here we propose to use balances, which are specific elements of this structure, to analyse situations where the paradox might appear. With the proposed approach, the centre of the analysed tables emerges as a natural way to compare them, one that by construction rules out the paradox. Key words: Aitchison geometry, geometric mean, orthogonal projection
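The amalgamation effect is easy to reproduce numerically. The tables below are the classic kidney-stone treatment example often used to illustrate the paradox (not data from this paper); comparing geometric means of the per-table odds, in the spirit of the centre proposed above, preserves the subgroup ordering.

```python
from math import prod

# Classic illustrative tables: (successes, trials) per treatment and subgroup
tables = {
    "small_stones": {"A": (81, 87), "B": (234, 270)},
    "large_stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(s, n):
    return s / n

# Treatment A wins in every subgroup...
for d in tables.values():
    assert rate(*d["A"]) > rate(*d["B"])

# ...but amalgamation (a nonlinear operation in the Aitchison geometry)
# reverses the ordering: the paradox.
agg = {t: tuple(map(sum, zip(*(tables[g][t] for g in tables)))) for t in ("A", "B")}
print(rate(*agg["A"]) < rate(*agg["B"]))  # True

# Comparing centres instead (geometric means of the per-table odds)
# preserves the subgroup ordering by construction.
def gm_odds(t):
    return prod(s / (n - s) for s, n in (tables[g][t] for g in tables)) ** 0.5

print(gm_odds("A") > gm_odds("B"))  # True
```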

Relevance: 30.00%

Abstract:

Geochemical data derived from the whole or partial analysis of various geologic materials represent a composition of mineralogies or solute species. Minerals are composed of structured relationships between cations and anions which, through atomic and molecular forces, keep the elements bound in specific configurations. The chemical compositions of minerals have specific relationships that are governed by these molecular controls. In the case of olivine, there is a well-defined relationship between Mn-Fe-Mg and Si. Balances between the principal elements defining olivine composition and other significant constituents in the composition (Al, Ti) have been defined, resulting in a near-linear relationship between the logarithmic relative proportion of Si versus (MgMnFe) and Mg versus (MnFe), which is typically described but poorly illustrated in the simplex. The present contribution corresponds to ongoing research that attempts to relate stoichiometry and geochemical data using compositional geometry. We describe here the approach by which stoichiometric relationships based on mineralogical constraints can be accounted for in the space of simplicial coordinates, using olivines as an example. Further examples for other mineral types (plagioclases and more complex minerals such as clays) are needed. Issues that remain to be dealt with include the reduction of the bulk chemical composition of a rock comprised of several minerals to a set of appropriate balances that describe the composition in a realistic mineralogical framework. The overall objective of our research is to answer the question: in cases where the mineralogy is unknown, are there suitable proxies that can be substituted? Key words: Aitchison geometry, balances, mineral composition, oxides

Relevance: 30.00%

Abstract:

Factor analysis, a frequently used technique for multivariate data inspection, is also widely applied to compositional data. The usual approach is to apply a centred logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then y = Λf + e (1) with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Under the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as Cov(y) = ΛΛ^T + ψ (2) where ψ = Cov(e) is diagonal. The diagonal elements of ψ, as well as the loadings matrix Λ, are estimated from an estimate of Cov(y). Let Y denote observed clr-transformed data, i.e. realizations of the random vector y. Outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y leads to robust estimates of Λ and ψ in (2); see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr-transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr-transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by C(Y) = V C(Z) V^T, where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994), and the results have a direct interpretation since the links to the original variables are preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
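A minimal sketch of the ilr detour described above, using scikit-learn's MCD as the robust estimator; the simulated compositions and the particular orthonormal basis V are our assumptions for illustration.

```python
import numpy as np
from sklearn.covariance import MinCovDet  # MCD robust covariance estimator

rng = np.random.default_rng(1)
X = rng.dirichlet([5, 3, 2, 1], size=200)  # 200 compositions, D = 4 parts
D = X.shape[1]

# Orthonormal basis V (D x (D-1)) linking clr and ilr: Z = clr(X) @ V
V = np.zeros((D, D - 1))
for j in range(D - 1):
    V[: j + 1, j] = 1.0
    V[j + 1, j] = -(j + 1)
    V[:, j] /= np.sqrt((j + 1) * (j + 2))

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)  # singular: rows sum to 0
Z = clr @ V                                              # full-rank ilr coordinates

C_Z = MinCovDet(random_state=0).fit(Z).covariance_  # robust C(Z), (D-1) x (D-1)
C_Y = V @ C_Z @ V.T                                 # back-transform: C(Y) = V C(Z) V^T

print(np.allclose(C_Y.sum(axis=1), 0))  # True: C(Y) inherits the clr singularity
```

Any orthonormal basis of the hyperplane orthogonal to the vector of ones works here; the full-rank matrix Z is what allows the MCD fit to succeed where clr data would fail.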

Relevance: 30.00%

Abstract:

In this theme you will work through a series of texts and activities and reflect on your view of research and the process of analysis of data and information. Most activities are supported by textual or audio material and are there to stimulate your thinking in a given area. The purpose of this theme is to help you gain a general overview of the main approaches to research design. Although the theme comprises two main sections, one on quantitative research and the other on qualitative research, this is purely to guide your study. The two approaches may be viewed as being part of a continuum with many research studies now incorporating elements of both styles. Eventually you will need to choose a research approach or methodology that will be practical, relevant, appropriate, ethical, of good quality and effective for the research idea or question that you have in mind.

Relevance: 30.00%

Abstract:

The Data Protection Regulation proposed by the European Commission contains important elements to facilitate and secure personal data flows within the Single Market. A harmonised level of protection of individual data is an important objective and all stakeholders have generally welcomed this basic principle. However, when putting the regulation proposal in the complex context in which it is to be implemented, some important issues are revealed. The proposal dictates how data is to be used, regardless of the operational context. It is generally thought to have been influenced by concerns over social networking. This approach implies protection of data rather than protection of privacy and can hardly lead to more flexible instruments for global data flows.

Relevance: 30.00%

Abstract:

The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. 
Copyright © 2004 Royal Meteorological Society
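In the ordinary least-squares setting the abstract starts from, the influence matrix reduces to the familiar hat matrix; its diagonal holds the self-sensitivities and its trace the degrees of freedom for signal. A small sketch on synthetic data (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix

S = X @ np.linalg.solve(X.T @ X, X.T)  # influence (hat) matrix S = X (X^T X)^{-1} X^T
self_sens = np.diag(S)                 # influence of each observation on its own fit

# Self-sensitivities lie in [0, 1]; their sum equals the number of parameters
print(self_sens.min() >= 0, np.isclose(self_sens.sum(), p))
```

Large diagonal entries flag high-influence observations (the "data-sparse or dynamically active" case of the abstract), while trace(S) = p measures the total information the fit extracts from the data.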

Relevance: 30.00%

Abstract:

Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak-constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (here, top-of-canopy reflectance). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. 
This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case in which dynamic constraints are applied, and assess the impact of those constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameter (dynamic model uncertainty) required to control the assimilation is estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
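A drastically simplified scalar analogue (ours, not the EO-LDAS prototype) of a weak-constraint variational cost, with a prior term, an observation term, and a zero-order process-model (persistence) term linking successive days:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
T = 60
truth = 0.5 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, T))  # "state" over time
obs_idx = np.arange(0, T, 5)                              # sparse observations with gaps
y = truth[obs_idx] + rng.normal(0, 0.02, obs_idx.size)

xb, sig_b, sig_o, sig_m = 0.5, 0.5, 0.02, 0.02  # prior value and the three uncertainties

def J(x):
    prior = ((x - xb) / sig_b) ** 2           # background (prior) constraint
    obs = ((x[obs_idx] - y) / sig_o) ** 2     # observation constraint
    model = (np.diff(x) / sig_m) ** 2         # weak zero-order dynamic constraint
    return 0.5 * (prior.sum() + obs.sum() + model.sum())

xa = minimize(J, np.full(T, xb), method="L-BFGS-B").x  # the analysis
print(J(xa) < J(np.full(T, xb)))  # True: the analysis reduces the cost
```

The dynamic term is what propagates information into the observation gaps; tightening or loosening sig_m plays the role of the hyperparameter estimated by cross-validation in the abstract.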

Relevance: 30.00%

Abstract:

Synoptic climatology relates the atmospheric circulation to the surface environment. The aim of this study is to examine the variability of the surface meteorological patterns that develop under different synoptic-scale categories over a suburban area with complex topography. Multivariate data analysis techniques were applied to a data set of surface meteorological elements. Three principal components were found, related to the thermodynamic state of the surface environment and to the two components of the wind speed. The variability of the surface flows was related to atmospheric circulation categories by applying Correspondence Analysis. Similar surface thermodynamic fields develop under the cyclonic categories, in contrast to the anti-cyclonic category. A strong, steady wind flow characterized by high shear values develops under the cyclonic Closed Low and the anticyclonic H–L categories, in contrast to the variable weak flow under the anticyclonic Open Anticyclone category.
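The component-extraction step can be illustrated with a minimal PCA on synthetic surface variables; the correlation structure below is an assumption for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
temp = rng.normal(15, 5, n)
humid = -0.8 * temp + rng.normal(70, 3, n)       # anticorrelated with temperature
u, v = rng.normal(0, 2, n), rng.normal(0, 2, n)  # wind-speed components
X = np.column_stack([temp, humid, u, v])

Z = (X - X.mean(0)) / X.std(0)                   # standardize the variables
eigval, eigvec = np.linalg.eigh(np.cov(Z.T))     # eigendecomposition of the covariance
order = np.argsort(eigval)[::-1]
explained = eigval[order] / eigval.sum()         # variance explained per component
print(explained.round(2))  # leading component: the temperature-humidity axis
```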

Relevance: 30.00%

Abstract:

This article analyses the results of an empirical study of the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence on unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertisement, and a test of the frequency of unsolicited commercial emails (UCE) received by customers as a consequence of their registration and submission of personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which unveils a significant gap between EU legal theory and practice in data protection. Although a wide majority of the websites in the sample (69%) have in place a system to ask for separate consent for engaging in marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The test with UCE shows that only one out of three websites (30.5%) respects the will of the data subject not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance among UK online service providers with essential requirements of data protection law. In this respect, it suggests that the standard of implementation, information and supervision by the UK authorities is inadequate, especially in light of the clarifications provided at EU level.

Relevance: 30.00%

Abstract:

In Britain, substantial cuts in police budgets alongside controversial handling of incidents such as politically sensitive enquiries, public disorder and relations with the media have recently triggered much debate about public knowledge and trust in the police. To date, however, little academic research has investigated how knowledge of police performance impacts citizens’ trust. We address this long-standing lacuna by exploring citizens’ trust before and after exposure to real performance data in the context of a British police force. The results reveal that being informed of performance data affects citizens’ trust significantly. Furthermore, direction and degree of change in trust are related to variations across the different elements of the reported performance criteria. Interestingly, the volatility of citizens’ trust is related to initial performance perceptions (such that citizens with low initial perceptions of police performance react more significantly to evidence of both good and bad performance than citizens with high initial perceptions), and citizens’ intentions to support the police do not always correlate with their cognitive and affective trust towards the police. In discussing our findings, we explore the implications of how being transparent with performance data can both hinder and be helpful in developing citizens’ trust towards a public organisation such as the police. From our study, we pose a number of ethical challenges that practitioners face when deciding what data to highlight, to whom, and for what purpose.

Relevance: 30.00%

Abstract:

Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
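A serial sketch (ours, not the paper's protocol) that makes explicit the quantities a parallel k-means must exchange each iteration: per-cluster partial sums and counts, which the dynamic group communication protocol would combine in place of a global all-reduce.

```python
import numpy as np

def kmeans(data, k, iters=20):
    # deterministic initialization for the sketch: evenly spaced data points
    centers = data[np.linspace(0, len(data) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # assignment step: local to each process's partition of the data
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        # reduction step: per-cluster (sum, count) pairs are the only
        # quantities that need to be communicated between processes
        sums = np.zeros_like(centers)
        counts = np.zeros(k)
        np.add.at(sums, labels, data)
        np.add.at(counts, labels, 1)
        centers = np.where(counts[:, None] > 0,
                           sums / np.maximum(counts, 1)[:, None], centers)
    return centers, labels

rng = np.random.default_rng(3)
pts = np.concatenate([rng.normal(c, 0.1, (40, 2)) for c in ((0.0, 0.0), (3.0, 3.0))])
centers, labels = kmeans(pts, 2)
```

Because only k (sum, count) pairs cross process boundaries per iteration, the communication volume is independent of the local data size, which is what makes reduced or group-local combining protocols attractive at extreme scale.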

Relevance: 30.00%

Abstract:

Trace element measurements in PM10–2.5, PM2.5–1.0 and PM1.0–0.3 aerosol were performed with 2 h time resolution at kerbside, urban background and rural sites during the ClearfLo winter 2012 campaign in London. The environment-dependent variability of emissions was characterized using the Multilinear Engine implementation of the positive matrix factorization model, conducted on data sets comprising all three sites but segregated by size. Combining the sites enabled separation of sources with high temporal covariance but significant spatial variability. Separation of sizes improved source resolution by preventing sources occurring in only a single size fraction from having too small a contribution for the model to resolve. Anchor profiles were retrieved internally by analysing data subsets, and these profiles were used in the analyses of the complete data sets of all sites for enhanced source apportionment. A total of nine different factors were resolved (notable elements in brackets): in PM10–2.5, brake wear (Cu, Zr, Sb, Ba), other traffic-related (Fe), resuspended dust (Si, Ca), sea/road salt (Cl), aged sea salt (Na, Mg) and industrial (Cr, Ni); in PM2.5–1.0, brake wear, other traffic-related, resuspended dust, sea/road salt, aged sea salt and S-rich (S); and in PM1.0–0.3, traffic-related (Fe, Cu, Zr, Sb, Ba), resuspended dust, sea/road salt, aged sea salt, reacted Cl (Cl), S-rich and solid fuel (K, Pb). Human activities enhance the kerb-to-rural concentration gradients of coarse aged sea salt, typically considered to have a natural source, by a factor of 1.7–2.2. These site-dependent concentration differences reflect the effect of local resuspension processes in London. The anthropogenically influenced factors traffic (brake wear and other traffic-related processes), dust and sea/road salt provide further kerb-to-rural concentration enhancements by direct source emissions by a factor of 3.5–12.7. 
The traffic and dust factors are mainly emitted in PM10–2.5 and show strong diurnal variations with concentrations up to 4 times higher during rush hour than during night-time. Regionally influenced S-rich and solid fuel factors, occurring primarily in PM1.0–0.3, have negligible resuspension influences, and concentrations are similar throughout the day and across the regions.
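As a stand-in for the Multilinear Engine PMF used above, scikit-learn's NMF likewise factors a non-negative concentration matrix (samples x elements) into source contributions and source profiles; the profiles and data below are synthetic, for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
true_F = np.array([[5.0, 1.0, 0.1],   # synthetic profile 1 over 3 "elements"
                   [0.2, 3.0, 4.0]])  # synthetic profile 2
true_G = rng.uniform(0, 2, size=(300, 2))                  # contribution time series
X = true_G @ true_F + rng.uniform(0, 0.05, size=(300, 3))  # observed concentrations

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # estimated source contributions (samples x factors)
F = model.components_        # estimated source profiles (factors x elements)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(G.shape, F.shape, round(rel_err, 3))
```

Unlike plain NMF, the Multilinear Engine additionally weights each entry by its measurement uncertainty and supports the anchor-profile constraints described in the abstract; this sketch shows only the shared bilinear core.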

Relevance: 30.00%

Abstract:

1. Understanding the behaviour and ecology of large carnivores is becoming increasingly important as the list of endangered species grows, with felids such as Panthera leo in some locations heading dangerously close to extinction in the wild. In order to have more reliable and effective tools to understand animal behaviour, movement and diet, we need to develop novel, integrated approaches and effective techniques to capture a detailed profile of animal foraging and movement patterns. 2. Ecological studies have shown considerable interest in using stable isotope methods, both to investigate the nature of animal feeding habits, and to map their geographical location. However, recent work has suggested that stable isotope analyses of felid fur and bone is very complex and does not correlate directly with the isotopic composition of precipitation (and hence geographical location). 3. We present new data that suggest these previous findings may be atypical, and demonstrate that isotope analyses of Felidae are suitable for both evaluating dietary inputs and establishing geo-location as they have strong environmental referents to both food and water. These data provide new evidence of an important methodology that can be applied to the family Felidae for future research in ecology, conservation, wildlife forensics and archaeological science.

Relevance: 30.00%

Abstract:

Type XVIII collagen is a component of basement membranes, and expressed prominently in the eye, blood vessels, liver, and the central nervous system. Homozygous mutations in COL18A1 lead to Knobloch Syndrome, characterized by ocular defects and occipital encephalocele. However, relatively little has been described on the role of type XVIII collagen in development, and nothing is known about the regulation of its tissue-specific expression pattern. We have used zebrafish transgenesis to identify and characterize cis-regulatory sequences controlling expression of the human gene. Candidate enhancers were selected from non-coding sequence associated with COL18A1 based on sequence conservation among mammals. Although these displayed no overt conservation with orthologous zebrafish sequences, four regions nonetheless acted as tissue-specific transcriptional enhancers in the zebrafish embryo, and together recapitulated the major aspects of col18a1 expression. Additional post-hoc computational analysis on positive enhancer sequences revealed alignments between mammalian and teleost sequences, which we hypothesize predict the corresponding zebrafish enhancers; for one of these, we demonstrate functional overlap with the orthologous human enhancer sequence. Our results provide important insight into the biological function and regulation of COL18A1, and point to additional sequences that may contribute to complex diseases involving COL18A1. More generally, we show that combining functional data with targeted analyses for phylogenetic conservation can reveal conserved cis-regulatory elements in the large number of cases where computational alignment alone falls short. (C) 2009 Elsevier Inc. All rights reserved.