902 results for principal component analysis (PCA)

Abstract:

Master's dissertation, Qualidade em Análises, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2013

Abstract:

This study aims to optimize the water quality monitoring of a polluted watercourse (the Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological and ecotoxicological data (obtained with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) from water samples collected monthly at seven monitoring sites over five campaigns (February, May, June, August, and September 2006). The results for some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to the classification of the Leça River's water quality as “bad” or “very bad”. PCA and CA identified monitoring sites with similar pollution patterns, distinguishing site 1 (located in the upstream stretch of the river) from all the sampling sites downstream. Ecotoxicity results corroborated this classification, revealing differences in space and time. The present study includes not only physical, chemical and bacteriological parameters but also ecotoxicological ones, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful for optimizing water quality monitoring networks by defining the minimum number of sites and their locations. These tools can thus support appropriate management decisions.
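A minimal numpy sketch of the kind of PCA screening the abstract describes, run on synthetic data: the 7×5 site-by-variable matrix, the "clean upstream site" and the group means are all invented for illustration, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monitoring matrix: 7 sites x 5 water-quality variables.
# Site 0 plays the role of the distinct upstream site; sites 1-6 share
# a common pollution pattern. All values are synthetic.
clean = rng.normal(0.0, 0.3, size=(1, 5))
polluted = rng.normal(3.0, 0.3, size=(6, 5))
X = np.vstack([clean, polluted])

# Standardize each variable, then PCA via SVD of the centered matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T                    # site coordinates on the PCs
explained = s**2 / np.sum(s**2)      # variance fraction per component

# A crude grouping on PC1: the site farthest from the median score
# is the outlier, here the upstream site 0.
pc1 = scores[:, 0]
outlier_site = int(np.argmax(np.abs(pc1 - np.median(pc1))))
```

In the study proper, cluster analysis (e.g. hierarchical clustering of the PC scores) would replace this one-dimensional cut.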

Abstract:

Dissertation to obtain the degree of Master in Electrical and Computer Engineering

Abstract:

This paper describes a method for analyzing scoliosis trunk deformities using Independent Component Analysis (ICA). Our hypothesis is that ICA can capture the scoliosis deformities visible on the trunk. Unlike Principal Component Analysis (PCA), ICA gives local shape variation and assumes that the data distribution is not normal. 3D torso images of 56 subjects, including 28 patients with adolescent idiopathic scoliosis and 28 healthy subjects, were analyzed using ICA. First, we observe that the independent components capture local scoliosis deformities such as shoulder variation, scapula asymmetry and waist deformation. Second, we note that the different scoliosis curve types are characterized by different combinations of specific independent components.
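As a hedged illustration of the PCA-versus-ICA contrast the abstract draws, the numpy sketch below runs a small symmetric FastICA (tanh contrast) on two synthetic non-Gaussian sources; the sources, the mixing matrix and the dimensions are invented stand-ins, not the torso data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Two independent non-Gaussian sources (stand-ins for local shape
# modes such as shoulder or waist asymmetry in the paper's setting).
s1 = np.sign(rng.normal(size=n)) * rng.uniform(0.5, 1.5, size=n)
s2 = rng.laplace(size=n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # arbitrary mixing matrix
X = A @ S                                # observed mixtures

# Whitening: this step is exactly PCA (decorrelate and rescale).
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Xw = (E / np.sqrt(d)) @ E.T @ Xc

# Symmetric FastICA iterations with the tanh contrast function.
W = rng.normal(size=(2, 2))
for _ in range(200):
    Y = W @ Xw
    G = np.tanh(Y)
    W_new = G @ Xw.T / n - np.diag((1 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                           # symmetric decorrelation

Y = W @ Xw
# Each recovered component should match one true source (up to sign
# and order), a separation PCA alone would not achieve here.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
```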

Abstract:

This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric that is used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account high order structure in images.
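Leaving the CKA/SVM machinery aside, the PCA baseline the paper compares against can be sketched in numpy: reconstruct signals from their top-k principal components and watch the residual shrink. The synthetic "signals" below stand in for the pedestrian images, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic signals lying near a 3-dimensional subspace of R^64;
# illustrative stand-ins for the paper's pedestrian images.
basis = rng.normal(size=(3, 64))
signals = rng.normal(size=(200, 3)) @ basis \
          + 0.05 * rng.normal(size=(200, 64))

# PCA reconstruction from the top-k components.
mean = signals.mean(axis=0)
Zc = signals - mean
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)

def reconstruct(k):
    # Project onto the first k principal directions and map back.
    return mean + (Zc @ Vt[:k].T) @ Vt[:k]

# Reconstruction error drops as components are added.
err = [float(np.linalg.norm(signals - reconstruct(k))) for k in (1, 2, 3)]
```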

Abstract:

At CoDaWork'03 we presented work on the analysis of archaeological glass compositional data. Such data typically consist of geochemical compositions involving 10-12 variables and approximate completely compositional data if the main component, silica, is included. We suggested that what has been termed `crude' principal component analysis (PCA) of standardized data often identified interpretable pattern in the data more readily than analyses based on log-ratio transformed data (LRA). The fundamental problem is that, in LRA, minor oxides with high relative variation, that may not be structure carrying, can dominate an analysis and obscure pattern associated with variables present at higher absolute levels. We investigate this further using sub-compositional data relating to archaeological glasses found on Israeli sites. A simple model for glass-making is that it is based on a `recipe' consisting of two `ingredients', sand and a source of soda. Our analysis focuses on the sub-composition of components associated with the sand source. A `crude' PCA of standardized data shows two clear compositional groups that can be interpreted in terms of different recipes being used at different periods, reflected in absolute differences in the composition. LRA can be undertaken either by normalizing the data or defining a `residual'. In either case, after some `tuning', these groups are recovered. The results from the normalized LRA are differently interpreted as showing that the source of sand used to make the glass differed. These results are complementary. One relates to the recipe used. The other relates to the composition (and presumed sources) of one of the ingredients. It seems to be axiomatic in some expositions of LRA that statistical analysis of compositional data should focus on relative variation via the use of ratios. Our analysis suggests that absolute differences can also be informative.
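A small numpy sketch of the two analyses being contrasted: `crude' PCA of standardized compositions versus a log-ratio analysis (here via the centred log-ratio transform), run on invented two-recipe compositional data rather than the Israeli glass measurements.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two hypothetical glass 'recipes' as 4-part compositions (rows sum
# to 1); the proportions are invented for illustration only.
base_a = np.array([0.70, 0.15, 0.10, 0.05])
base_b = np.array([0.55, 0.30, 0.10, 0.05])
X = np.vstack([base_a + 0.01 * rng.normal(size=(20, 4)),
               base_b + 0.01 * rng.normal(size=(20, 4))])
X = np.abs(X)
X /= X.sum(axis=1, keepdims=True)        # re-close the compositions

def pc1_scores(Y):
    # First-principal-component scores of a data matrix.
    Z = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[0]

# 'Crude' PCA on standardized raw compositions ...
crude = pc1_scores((X - X.mean(axis=0)) / X.std(axis=0))
# ... versus PCA after the centred log-ratio (clr) transform.
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
logratio = pc1_scores(clr)
```

On data this clean, both analyses separate the two recipes along PC1; the abstract's point is that on real data the two can disagree about which structure dominates.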

Abstract:

In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set, including high-resolution bulk petrography and heavy-mineral analyses of the Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from west to east (Garzanti et al., 2004; 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin by a trunk river running longitudinally, parallel to the South-Alpine belt (Vezzoli and Garzanti, 2008). This scenario changed rapidly during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of core samples show that the longitudinal trunk river was at this time shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, and glacial sediments carried by Alpine valley glaciers invaded the alluvial plain. Key words: detrital modes; modern sands; provenance; principal component analysis; similarity; Canberra distance; palaeodrainage.
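The keywords name the Canberra distance as the similarity measure; a minimal sketch follows, in which the end-member detrital modes and the core sample are invented for illustration, not the Po Plain data.

```python
import numpy as np

def canberra(u, v):
    # Canberra distance: sum of |u_i - v_i| / (|u_i| + |v_i|),
    # skipping components where both entries are zero.
    num = np.abs(u - v)
    den = np.abs(u) + np.abs(v)
    mask = den > 0
    return float((num[mask] / den[mask]).sum())

rng = np.random.default_rng(7)
# Hypothetical detrital modes (percent) of two end-member sources.
western_alps = np.array([40.0, 30.0, 20.0, 10.0])
southern_alps = np.array([10.0, 20.0, 30.0, 40.0])
# A synthetic core sample drawn near the Southern-Alps signature.
core_sample = southern_alps + rng.normal(0.0, 2.0, size=4)

d_west = canberra(core_sample, western_alps)
d_south = canberra(core_sample, southern_alps)   # should be smaller
```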

Abstract:

In vitro batch culture fermentations were conducted with grape seed polyphenols and human faecal microbiota, in order to monitor both changes in precursor flavan-3-ols and the formation of microbial-derived metabolites. By the application of UPLC-DAD-ESI-TQ MS, monomers, and dimeric and trimeric procyanidins were shown to be degraded during the first 10 h of fermentation, with notable inter-individual differences being observed between fermentations. This period (10 h) also coincided with the maximum formation of intermediate metabolites, such as 5-(3′,4′-dihydroxyphenyl)-γ-valerolactone and 4-hydroxy-5-(3′,4′-dihydroxyphenyl)-valeric acid, and of several phenolic acids, including 3-(3,4-dihydroxyphenyl)-propionic acid, 3,4-dihydroxyphenylacetic acid, 4-hydroxymandelic acid, and gallic acid (5–10 h maximum formation). Later phases of the incubations (10–48 h) were characterised by the appearance of mono- and non-hydroxylated forms of previous metabolites by dehydroxylation reactions. Of particular interest was the detection of γ-valerolactone, which was seen for the first time as a metabolite from the microbial catabolism of flavan-3-ols. Changes registered during fermentation were finally summarised by a principal component analysis (PCA). Results revealed that 5-(3′,4′-dihydroxyphenyl)-γ-valerolactone was a key metabolite in explaining inter-individual differences and delineating the rate and extent of the microbial catabolism of flavan-3-ols, which could finally affect absorption and bioactivity of these compounds.

Abstract:

Background: The validity of ensemble averaging of event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. New method: We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). Results: After validating the pipeline on simulated data, we tested it on data from two experiments – a P300 speller paradigm with a single subject and a language processing study with 25 subjects. Results revealed evidence for the existence of six clusters in one experimental condition of the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership.
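A compact numpy sketch of the tail of this pipeline (k-means on denoised trials, then a PCA projection for 2-D visualisation). The synthetic "trials", the two waveform shapes, and the informed centroid seeding, used here in place of the GA initialisation, are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)
# Two synthetic single-trial shapes (early vs late peak) plus noise;
# stand-ins for denoised ERP trials, not real EEG data.
shape_a = np.exp(-((t - 0.3) ** 2) / 0.005)
shape_b = np.exp(-((t - 0.6) ** 2) / 0.005)
trials = np.vstack([shape_a + 0.1 * rng.normal(size=(30, 100)),
                    shape_b + 0.1 * rng.normal(size=(30, 100))])

def kmeans(X, C, iters=50):
    # Plain k-means from given initial centroids C (the paper seeds
    # these with a Genetic Algorithm; here we just pick two trials).
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) for j in range(len(C))])
    return labels, C

labels, centroids = kmeans(trials, trials[[0, 30]].copy())

# PCA projection onto the top two components, standing in for the
# paper's visualisation scheme.
Z = trials - trials.mean(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
coords = Z @ Vt[:2].T            # 2-D coordinates per trial
```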

Abstract:

Astronomy has evolved almost exclusively by the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes in which one combines both techniques simultaneously, producing images with spectral resolution. To extract information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for the analysis of data cubes (data from single-field observations, containing two spatial dimensions and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms a system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we will call tomograms. The association of the tomograms (images) with the eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this property is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not previously known. Furthermore, we show that this nucleus is displaced from the centre of the stellar bulge.
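A minimal numpy sketch of the unfold-and-SVD mechanics the abstract describes; the synthetic cube, its dimensions and the injected "nuclear" emission line are assumptions for illustration, not the NGC 4736 data.

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx, nlam = 8, 8, 50
lam = np.linspace(0.0, 1.0, nlam)
# Synthetic cube: flat continuum everywhere plus an emission line
# confined to the central 2x2 pixels (a toy 'nuclear' source).
line = np.exp(-((lam - 0.5) ** 2) / 0.001)
cube = np.ones((ny, nx, nlam))
cube[3:5, 3:5] += 3.0 * line
cube += 0.01 * rng.normal(size=cube.shape)

# Unfold to a (pixels x wavelengths) matrix, mean-subtract, SVD:
# rows of Vt are the eigenvectors (eigenspectra); refolding the
# scores on one eigenvector gives the corresponding tomogram.
M = cube.reshape(ny * nx, nlam)
Mc = M - M.mean(axis=0)
U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
eigenspectrum1 = Vt[0]                         # PC1 spectrum
tomogram1 = (Mc @ Vt[0]).reshape(ny, nx)       # PC1 image
```

Here PC1 isolates the line-emitting region: its eigenspectrum peaks at the line and its tomogram lights up the central pixels.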

Abstract:

Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). These unified algorithms can be used to extract principal components and, altered simply by the sign of one term, can also serve as minor component extractors, which is of practical significance for implementation. Convergence of the existing unified algorithms is guaranteed only under the condition that their learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA/MCA algorithm with a constant learning rate, and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results obtained lay a solid foundation for applications of the proposed algorithm.
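The sign-flip idea can be sketched with a simple Rayleigh-quotient iteration in numpy. Note that this sketch uses explicit renormalisation for stability, which is precisely the kind of crutch the paper's constant-learning-rate algorithm is designed to avoid, so it illustrates the PCA/MCA duality only, not the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
# A synthetic covariance matrix with well-separated eigenvalues.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
C = Q @ np.diag([5.0, 3.0, 1.0, 0.2]) @ Q.T

def extract(C, sign, eta=0.1, steps=2000):
    # sign=+1 ascends the Rayleigh quotient (principal component);
    # sign=-1 descends it (minor component). Only the sign differs.
    w = rng.normal(size=C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        w = w + sign * eta * (C @ w - (w @ C @ w) * w)
        w /= np.linalg.norm(w)     # explicit renormalisation
    return w

w_pca = extract(C, +1)
w_mca = extract(C, -1)
ray_pca = float(w_pca @ C @ w_pca)   # approaches 5.0 (largest eigenvalue)
ray_mca = float(w_mca @ C @ w_mca)   # approaches 0.2 (smallest eigenvalue)
evals, evecs = np.linalg.eigh(C)     # ascending order, for checking
```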

Abstract:

Principal Topic: Internationalisation strategies are important for company expansion because New Zealand, with its four million people, has such a small market. There may or may not exist “agency costs” in the use of Outside Directors. Ownership patterns may also influence internationalisation.

Methodology/Key Propositions: This study uses Principal Component Analysis both in a grounded theory approach and in a confirmatory approach.

Results and Implications: We find evidence that in New Zealand, contrary to some previous research elsewhere, Outside Directors actually have less influence on internationalisation than Inside Directors. Private ownership also seems to have a greater association with internationalisation than other ownership types. A highly reliable sample of 1989 New Zealand company directors showed that factors such as gender, age, location and even industry sector were irrelevant. Two factors were important in explaining whether a company goes off-shore: the size of the company, and the ownership type and role of the CEO. In essence, this study validates New Zealand's present strategy of “picking winners”, that is, selecting firms based upon factor components. This study adds strength to that strategy because it identifies the concrete components that should be taken into account when picking companies for special treatment, e.g. export promotion.

Abstract:

Time-resolved extinction spectra, assisted with two-dimensional correlation spectroscopy (2DCOS) analysis and principal component analysis (PCA), were employed to investigate the interaction between bovine serum albumin (BSA) and metal nanoparticles (NPs). A series of localized surface plasmon resonance (LSPR) spectra of metal NPs were measured just after a small amount of BSA was added to the metal colloids. Through 2DCOS analysis, remarkable changes in the LSPR intensities were observed. According to the PCA, the interaction process divides into three periods. Transmission electron microscopy, dynamic light scattering, and ζ-potential measurements were also employed to characterize the interaction between BSA and metal NPs. The addition of BSA causes silver NPs to aggregate through the electrostatic interaction between them, but it has less effect on gold NPs. In a mixed gold and silver system, gold NPs can weaken the interaction between silver NPs and BSA. The combination of 2DCOS analysis and LSPR spectroscopy is powerful for exploring the LSPR spectra of systems involving metal NPs. This combined technique holds great potential for LSPR sensing through the analysis of slight spectral changes in metal colloids.

Abstract:

This paper presents a new multivariate process capability index (MPCI) which is based on principal component analysis (PCA) and depends on a parameter (Formula presented.) that can take any real value. This MPCI generalises several existing PCA-based multivariate indices proposed by other authors when (Formula presented.) or (Formula presented.). A key contribution of this paper is to show that there is a direct correspondence between this MPCI and process yield for a unique value of (Formula presented.). This result is used to establish a relationship with the capability status of the process, and to show that, under some mild conditions, the estimator of this MPCI is consistent and converges to a normal distribution. This is then applied to performing tests of statistical hypotheses and to determining sample sizes. Several numerical examples are presented with the objective of illustrating the procedures and demonstrating how they can be applied to determine the viability and capability of different manufacturing processes.

Abstract:

Principal component analysis (PCA) was used to determine the association between dietary patterns and cognitive function, and to examine how classification systems based on food groups and food items affect the level of association between diet and cognitive function. The present study focuses on the older segment (age 60+) of the Australian Diabetes, Obesity and Lifestyle Study (AusDiab) sample that completed the food frequency questionnaire at Wave 1 (1999/2000) and the mini-mental state examination and tests of memory, verbal ability and processing speed at Wave 3 (2012). Three methods were used to classify foods before applying PCA. In the first instance, the 101 individual food items asked about in the questionnaire were used (no categorisation). In the second and third instances, foods were combined and reduced to 32 and 20 food groups, respectively, based on nutrient content and culinary usage, a method employed in several other published PCA studies. Logistic regression analysis and generalized linear modelling were used to analyse the relationship between the PCA-derived dietary patterns and cognitive outcomes. Broader food group classifications explained a greater proportion of the variance in food use in the sample (the 101 individual foods explained 23.22% of total variance in food use, while the 32 and 20 food groups explained 29.74% and 30.74%, respectively). Three dietary patterns were found to be associated with decreased odds of cognitive impairment (CI). For the dietary patterns derived from the 101 individual food items, each one-unit increase in pattern score was associated with decreased odds of cognitive impairment (Fruit and Vegetable Pattern: p = 0.030, OR 1.061, confidence interval 1.006–1.118; Fish, Legumes and Vegetable Pattern: p = 0.040, OR 1.032, confidence interval 1.001–1.064; Dairy, Cereal and Eggs Pattern: p = 0.003, OR 1.020, confidence interval 1.007–1.033). Different results were observed when the effects of dietary patterns on memory, processing speed and vocabulary were examined. Complex patterns of association between dietary factors and cognition were evident, the most consistent findings being the protective effect of high consumption of vegetables and plant-based foods and the negative effect of 'Western' patterns on cognition. Further long-term studies and investigation of the best methods of dietary measurement are needed to better understand diet-disease relationships in this age group.
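A hedged numpy sketch of the analysis chain (a PCA-derived pattern score, then logistic regression on an impairment outcome). The food matrix, loadings, effect size and the hand-rolled gradient-descent fit are all illustrative assumptions, not AusDiab data or the study's models.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
# Synthetic intake matrix: 20 'food groups' driven by one latent
# plant-based pattern; invented for illustration only.
latent = rng.normal(size=n)
loadings = rng.uniform(0.3, 1.0, size=20)
foods = np.outer(latent, loadings) + rng.normal(size=(n, 20))

# PCA: each person's score on the first component is their
# dietary-pattern score.
Z = (foods - foods.mean(axis=0)) / foods.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
score = Z @ Vt[0]
score *= np.sign(score @ latent)     # fix PCA's arbitrary sign

# Simulated outcome: higher pattern score -> lower odds of impairment.
y = rng.binomial(1, 1.0 / (1.0 + np.exp(score)))

def fit_logistic(x, y, steps=2000, lr=0.1):
    # Plain gradient descent on the logistic log-loss.
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 -= lr * float((p - y).mean())
        b1 -= lr * float(((p - y) * x).mean())
    return b0, b1

b0, b1 = fit_logistic(score, y)      # b1 should come out negative
```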