46 results for principal component analysis (PCA)


Relevance: 100.00%

Abstract:

Psychometric analysis of the AF5 multidimensional scale of self-concept in a sample of adolescents and adults in Catalonia. The aim of this study is to carry out a psychometric study of the AF5 scale in a sample of 4,825 Catalan subjects aged 11 to 63. They are students from compulsory secondary education (ESO), high school, middle-level vocational training (CFGM) and university. Using a principal component analysis (PCA), the theoretical validity of the components is established and the reliability of the instrument is analyzed. Differential analyses are performed by gender and normative group using a 2 × 6 factorial design. The normative group variable classifies the different educational levels into 6 sub-groups: university, post-compulsory secondary education (high school and CFGM), 4th of ESO, 3rd of ESO, 2nd of ESO and 1st of ESO. The results indicate that the reliability of the Catalan version of the scale is similar to that of the original scale. The factorial structure also fits the original model established beforehand. Significant differences by normative group are observed in the four components of self-concept explored (social, family, academic/occupational and physical). By gender, significant differences appear in the physical, academic and social self-concept components, but not in the family component.
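
As a rough illustration of the two analyses named in this abstract, the sketch below runs a PCA and computes Cronbach's alpha on simulated Likert-style responses; the item count, data and all names are invented here, not taken from the study.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    items = rng.integers(1, 6, size=(4825, 30)).astype(float)  # simulated Likert items

    # Component structure: variance captured by four components, matching the
    # four self-concept components explored in the abstract.
    pca = PCA(n_components=4).fit(items)
    print("variance explained:", pca.explained_variance_ratio_.round(3))

    def cronbach_alpha(x):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        k = x.shape[1]
        return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

    print("alpha:", round(cronbach_alpha(items), 3))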

Relevance: 100.00%

Abstract:

This paper addresses the application of PCA to categorical data prior to diagnosing a patient data set with a Case-Based Reasoning (CBR) system. The particularity is that standard PCA techniques are designed to deal with numerical attributes, but our medical data set contains many categorical attributes, so alternative methods such as RS-PCA are required. Thus, we propose to hybridize RS-PCA (Regular Simplex PCA) with a simple CBR. Results show that the hybrid system produces diagnoses for a medical data set similar to those obtained using the original attributes. These results are quite promising, since they allow diagnosis with less computational effort and memory storage.
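
RS-PCA itself is not available in common libraries, so the sketch below substitutes ordinary PCA on one-hot encoded categorical attributes and reduces the CBR to its retrieval step (nearest neighbour in the projected space); data, shapes and component counts are illustrative assumptions, not the paper's.

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X = rng.choice(["low", "mid", "high"], size=(200, 8))  # categorical attributes
    y = rng.integers(0, 2, size=200)                       # diagnosis labels

    enc = OneHotEncoder(sparse_output=False).fit(X)
    pca = PCA(n_components=5).fit(enc.transform(X))
    case_base = pca.transform(enc.transform(X))            # reduced case base

    # Retrieval-only CBR: diagnose a new case from its most similar stored case.
    cbr = KNeighborsClassifier(n_neighbors=1).fit(case_base, y)
    new_case = pca.transform(enc.transform(X[:1]))
    print(cbr.predict(new_case))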

Relevance: 100.00%

Abstract:

In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used, with the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made, and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to open questions. In this paper we follow the lines of the paper of R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components and visualizing the factor scores in a spatial context: the compositional factors are plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Keywords: compositional data analysis, biplot, deep sea sediments.
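
A minimal sketch of the transformation step described above, assuming nothing about the actual core data: compositions are ilr-transformed with a Helmert-type basis and reduced by PCA, so that the resulting factor scores could be plotted against core depth.

    import numpy as np

    def ilr(X):
        # Isometric log-ratio transform: clr followed by projection onto an
        # orthonormal (Helmert-type) basis of the clr hyperplane.
        D = X.shape[1]
        clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
        H = np.zeros((D - 1, D))
        for i in range(1, D):
            H[i - 1, :i] = 1.0 / np.sqrt(i * (i + 1))
            H[i - 1, i] = -i / np.sqrt(i * (i + 1))
        return clr @ H.T

    rng = np.random.default_rng(2)
    comps = rng.dirichlet(np.ones(6), size=100)  # stand-in geochemical compositions
    depth = np.arange(100)                       # stand-in core depth

    Z = ilr(comps)
    Z -= Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:3].T  # three candidate factors; plot scores[:, k] versus depth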

Relevance: 100.00%

Abstract:

The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, will be illustrated.
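
The following sketch shows one way such log-contrast values can be obtained (the paper's exact procedure may differ): centred log-ratio data are reduced by PCA, and because the loadings of each component sum to zero, the component scores are log-contrasts whose distribution can then be compared against reference values.

    import numpy as np

    def first_log_contrast(X):
        # clr rows sum to zero, so the PCA loadings are orthogonal to the
        # constant vector and define a log-contrast:
        # sum_j a_j * log(x_j) with sum_j a_j = 0.
        clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
        clr_c = clr - clr.mean(axis=0)
        _, _, Vt = np.linalg.svd(clr_c, full_matrices=False)
        a = Vt[0]                     # a.sum() is ~0 by construction
        return clr_c @ a, a

    rng = np.random.default_rng(3)
    waters = rng.dirichlet(np.ones(7), size=80)  # stand-in for the 7 ion concentrations
    values, loadings = first_log_contrast(waters)
    # Shifts of `values` relative to a reference distribution could flag contamination.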

Relevance: 100.00%

Abstract:

The information provided by the alignment-independent GRid-INdependent Descriptors (GRIND) can be condensed by the application of principal component analysis, obtaining a small number of principal properties (GRIND-PP) that are more suitable for describing molecular similarity. The objective of the present study is to optimize the diverse parameters involved in obtaining the GRIND-PP and to validate their suitability for applications requiring a biologically relevant description of molecular similarity. With this aim, GRIND-PP computed with a collection of diverse settings were used to carry out ligand-based virtual screening (LBVS) under standard conditions. The quality of the results obtained was remarkable and comparable with other LBVS methods, and their detailed statistical analysis allowed us to identify the method settings most determinant for the quality of the results, as well as their optimum values. Remarkably, some of these optimum settings differ significantly from those used in previously published applications, revealing their unexplored potential. Their applicability to large compound databases was also explored by comparing the equivalence of the results obtained using either computed or projected principal properties. In general, the results of the study confirm the suitability of the GRIND-PP for practical applications and provide useful hints about how they should be computed to obtain optimum results.
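
Computing GRIND themselves requires dedicated descriptor software, so the sketch below assumes a precomputed descriptor matrix and only illustrates the condensation and screening steps: PCA yields the principal properties, and database compounds are ranked by their distance to a query ligand in that space. All sizes and the component count are invented.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    descriptors = rng.normal(size=(1000, 400))  # stand-in for a GRIND matrix

    pca = PCA(n_components=10).fit(descriptors) # number of properties is a guess
    pp = pca.transform(descriptors)             # principal properties (GRIND-PP)

    query = pp[0]                               # a known active used as query
    distances = np.linalg.norm(pp - query, axis=1)
    ranking = np.argsort(distances)             # screening order for the database
    print(ranking[:10])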

Relevance: 100.00%

Abstract:

Background: Peach fruit undergoes a rapid softening process that involves a number of metabolic changes. Storing fruit at low temperatures has been widely used to extend its postharvest life. However, this leads to undesired changes, such as mealiness and browning, which affect the quality of the fruit. In this study, a 2-D DIGE approach was designed to screen for differentially accumulated proteins in peach fruit during normal softening as well as under conditions that led to fruit chilling injury. Results: The analysis allowed us to identify 43 spots (representing about 18% of the total number analyzed) that show statistically significant changes. Thirty-nine of the proteins could be identified by mass spectrometry. Some of the proteins that changed during postharvest had previously been related to peach fruit ripening and cold stress. However, we identified other proteins that had not been linked to these processes. A graphical display of the relationship between the differentially accumulated proteins was obtained using pairwise average-linkage cluster analysis and principal component analysis. Proteins such as endopolygalacturonase, catalase, NADP-dependent isocitrate dehydrogenase, pectin methylesterase and dehydrins were found to be very important for distinguishing between healthy and chill-injured fruit. A categorization of the differentially accumulated proteins was performed using Gene Ontology annotation. The results showed that the 'response to stress', 'cellular homeostasis', 'metabolism of carbohydrates' and 'amino acid metabolism' biological processes were the most affected during postharvest. Conclusions: Using a comparative proteomic approach with 2-D DIGE allowed us to identify proteins that showed stage-specific changes in their accumulation pattern. Several proteins related to response to stress, cellular homeostasis, cellular component organization and carbohydrate metabolism were detected as being differentially accumulated. Finally, a significant proportion of the proteins identified had not previously been associated with softening, cold storage or chilling injury; thus, comparative proteomics has proven to be a valuable tool for understanding fruit softening and postharvest changes.

Relevance: 100.00%

Abstract:

We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
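
A compact sketch of weighted log-ratio analysis as described above: the log table is doubly centred using row and column masses, and a weighted SVD gives the coordinates. This follows the usual formulation of the method; the scalings used in the paper may differ in detail.

    import numpy as np

    def weighted_lra(N, k=2):
        # Weighted log-ratio analysis of a positive two-way table N.
        P = N / N.sum()
        r, c = P.sum(axis=1), P.sum(axis=0)       # row and column masses
        L = np.log(N)
        # Weighted double centring of the log table.
        Y = L - (r @ L)[None, :] - (L @ c)[:, None] + r @ L @ c
        S = np.sqrt(r)[:, None] * Y * np.sqrt(c)[None, :]
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        rows = U[:, :k] * s[:k] / np.sqrt(r)[:, None]  # principal row coordinates
        cols = Vt[:k].T * s[:k] / np.sqrt(c)[:, None]  # principal column coordinates
        return rows, cols

    freq = np.array([[12., 7., 3.], [5., 9., 4.], [2., 3., 8.]])  # toy table
    rows, cols = weighted_lra(freq)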

Relevance: 100.00%

Abstract:

We consider the joint visualization of two matrices which have common rows and columns, for example multivariate data observed at two time points or split according to a dichotomous variable. Methods of interest include principal components analysis for interval-scaled data, or correspondence analysis for frequency data or ratio-scaled variables on commensurate scales. A simple result in matrix algebra shows that by setting up the matrices in a particular block format, matrix sum and difference components can be visualized. The case when we have more than two matrices is also discussed, and the methodology is applied to data from the International Social Survey Program.
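
One standard identity illustrates why sum and difference components can separate in such a block arrangement (the paper's exact block format may differ): the symmetric block matrix built from two matrices A and B is orthogonally similar to a block-diagonal matrix holding their sum and difference,

    \[
    \tfrac{1}{2}
    \begin{pmatrix} I & I \\ I & -I \end{pmatrix}
    \begin{pmatrix} A & B \\ B & A \end{pmatrix}
    \begin{pmatrix} I & I \\ I & -I \end{pmatrix}
    =
    \begin{pmatrix} A+B & 0 \\ 0 & A-B \end{pmatrix}
    \]

so an analysis of the block matrix decomposes into separate analyses of A+B and A-B.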

Relevance: 100.00%

Abstract:

Dual scaling of a subjects-by-objects table of dominance data (preferences, paired comparisons and successive categories data) has been contrasted with correspondence analysis, as if the two techniques were somehow different. In this note we show that dual scaling of dominance data is equivalent to the correspondence analysis of a table which is doubled with respect to subjects. We also show that the results of both methods can be recovered from a principal components analysis of the undoubled dominance table which is centred with respect to subject means.
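
The second equivalence mentioned above is easy to sketch (scalings aside; dual scaling and correspondence analysis apply their own normalisations, omitted here): centre the dominance table by subject means and take an SVD.

    import numpy as np

    rng = np.random.default_rng(5)
    D = rng.integers(0, 7, size=(50, 8)).astype(float)  # subjects x objects dominance

    C = D - D.mean(axis=1, keepdims=True)  # centre with respect to subject means
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    subject_scores = U[:, :2] * s[:2]      # first two dimensions
    object_scores = Vt[:2].T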

Relevance: 100.00%

Abstract:

The present study discusses retention criteria for principal components analysis (PCA) applied to Likert-scale items typical of psychological questionnaires. The main aim is to recommend that applied researchers refrain from relying only on the eigenvalue-greater-than-one criterion; alternative procedures are suggested for adjusting for sampling error. An additional objective is to add evidence on the consequences of applying this rule when PCA is used with discrete variables. The experimental conditions were studied by means of Monte Carlo sampling, including several sample sizes, different numbers of variables and answer alternatives, and four non-normal distributions. The results suggest that even when all the items, and thus the underlying dimensions, are independent, eigenvalues greater than one are frequent and can explain up to 80% of the variance in the data, meeting the empirical criterion. The consequences of using Kaiser's rule are illustrated with a clinical psychology example. The size of the eigenvalues turned out to be a function of the sample size and the number of variables, which is also the case for parallel analysis, as previous research shows. To enhance the application of alternative criteria, an R package was developed for deciding the number of principal components to retain by means of confidence intervals constructed around the eigenvalues corresponding to a lack of relationship between discrete variables.
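
A minimal sketch of the kind of alternative the abstract recommends, here a parallel-analysis-style rule for discrete items: retain components only while observed eigenvalues exceed a high percentile of eigenvalues from independent data of the same size and response scale. The R package mentioned is not reproduced here; this is illustrative only.

    import numpy as np

    def retain_components(X, n_sims=200, percentile=95, seed=0):
        rng = np.random.default_rng(seed)
        n, k = X.shape
        obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        sim = np.empty((n_sims, k))
        for i in range(n_sims):
            R = rng.choice(np.unique(X), size=(n, k))  # independent discrete items
            sim[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
        thresh = np.percentile(sim, percentile, axis=0)
        # Index of the first observed eigenvalue that falls below its threshold
        # (0 if even the first one does).
        return int(np.argmax(obs < thresh))

    rng = np.random.default_rng(1)
    likert = rng.integers(1, 6, size=(300, 12))        # independent items
    print(retain_components(likert))  # near 0, while Kaiser's rule would keep several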

Relevance: 100.00%

Abstract:

Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA is when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal components analysis (PCA) with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
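
One of the transformation-based routes can be sketched as follows, under stated assumptions: densities observed on a common grid get a clr-type transform (log density centred by its mean log), which respects their compositional nature, and functional PCA is approximated by ordinary PCA on the grid values. The Beta-like shapes merely stand in for income densities.

    import numpy as np

    rng = np.random.default_rng(6)
    grid = np.linspace(0.01, 0.99, 100)
    shapes = rng.uniform(0.5, 3.0, size=(60, 2))
    dens = np.array([grid**a * (1 - grid)**b for a, b in shapes])
    dens /= dens.sum(axis=1, keepdims=True) * (grid[1] - grid[0])  # integrate to 1

    clr = np.log(dens) - np.log(dens).mean(axis=1, keepdims=True)  # clr-type transform
    clr_c = clr - clr.mean(axis=0)
    _, s, Vt = np.linalg.svd(clr_c, full_matrices=False)
    scores = clr_c @ Vt[:2].T   # two-dimensional representation of the densities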

Relevance: 100.00%

Abstract:

Three multivariate statistical tools (principal component analysis, factor analysis and discriminant analysis) have been tested to characterize and model the sags registered in distribution substations. The models use several features, obtained from the voltage and current waveforms, to represent the magnitude, duration and degree of unbalance of the sags. The techniques are tested and compared using 69 sag registers. The advantages and drawbacks of each technique are listed.

Relevance: 100.00%

Abstract:

A statistical method for classifying sags according to their origin, downstream or upstream of the recording point, is proposed in this work. The goal is to obtain a statistical model, based on the sag waveforms, useful for characterising one type of sag and discriminating it from the other. This model is built on the basis of multi-way principal component analysis and is later used to project the available registers into a new space of lower dimension. Thus, a case base of diagnosed sags is built in the projection space. Classification is then done by comparing new sags against those existing in the case base. Similarity is defined in the projection space using a combination of distances to recover the nearest neighbours of the new sag. Finally, the method assigns the origin of the new sag according to the origin of its neighbours.
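
A hedged sketch of this pipeline: each register is treated as a time-by-phase waveform array, the three-way data set is unfolded register-wise, ordinary PCA stands in for the multi-way PCA, and a neighbour vote in the projection space assigns the origin. Array sizes, labels and the neighbour count are illustrative guesses, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(7)
    sags = rng.normal(size=(150, 640, 3))      # registers x time samples x phases
    origin = rng.integers(0, 2, size=150)      # 0 = downstream, 1 = upstream

    X = sags.reshape(len(sags), -1)            # unfold each register into one row
    pca = PCA(n_components=8).fit(X)
    case_base = pca.transform(X)               # projected case base of diagnosed sags

    knn = KNeighborsClassifier(n_neighbors=5).fit(case_base, origin)
    new_sag = rng.normal(size=(1, 640, 3)).reshape(1, -1)
    print(knn.predict(pca.transform(new_sag))) # origin from the nearest neighbours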

Relevance: 100.00%

Abstract:

The work presented in this paper belongs to the power quality knowledge area and deals with voltage sags in power transmission and distribution systems. Propagating throughout the power network, voltage sags can cause plenty of problems for domestic and industrial loads, at considerable financial cost. To impose penalties on the responsible party and to improve monitoring and mitigation strategies, sags must be located in the power network. With such a worthwhile objective, this paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. It solves this problem by developing hybrid methods that employ multiway principal component analysis (MPCA) as a dimension reduction tool. MPCA re-expresses the sag waveforms in a new subspace using just a few scores. We train some well-known classifiers with these scores and exploit them for the classification of future sags. The capabilities of the proposed method for dimension reduction and classification are examined using real data gathered from three substations in Catalonia, Spain. The obtained classification rates confirm the effectiveness of the developed hybrid methods as new tools for sag classification.