898 results for PRINCIPAL COMPONENT ANALYSIS
Abstract:
As one of the newest members of the field of artificial immune systems (AIS), the Dendritic Cell Algorithm (DCA) is based on behavioural models of natural dendritic cells (DCs). Unlike other AIS, the DCA does not rely on training data; instead, domain or expert knowledge is required to predetermine the mapping of the input signals of a particular instance to the three categories used by the DCA. This data preprocessing phase has been criticised as manually over-fitting the data to the algorithm, which is undesirable. Therefore, in this paper we attempt to ascertain whether it is possible to use principal component analysis (PCA) techniques to automatically categorise input data while still generating useful and accurate classification results. The integrated system is tested on a biometric dataset for stress recognition in automobile drivers. The experimental results show that the application of PCA to the DCA for the purpose of automated data preprocessing is successful.
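The automated categorisation step described above can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the paper's implementation: it uses synthetic data and assigns each input feature to whichever of the first three principal components it loads on most strongly, standing in for the DCA's three signal categories.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in for the biometric input features (e.g. driver
# physiology): 200 instances, 6 candidate features.
X = rng.normal(size=(200, 6))

# Keep the first three principal components, one per DCA signal
# category (a modelling assumption for this sketch).
pca = PCA(n_components=3).fit(X)

# Assign each feature to the component it loads on most strongly,
# replacing the manual expert-defined signal mapping.
category = np.argmax(np.abs(pca.components_), axis=0)
print(category)
```

How the three components are matched to the DCA's semantic categories (PAMP, danger, safe) is a separate design decision that the paper addresses; the largest-loading rule here is only one plausible choice.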
Abstract:
Vigna unguiculata (L.) Walp (cowpea) is a food crop with high nutritional value that is cultivated throughout tropical and subtropical regions of the world. The main constraint on high productivity of cowpea is water deficit, caused by the long periods of drought that occur in these regions. The aim of the present study was to select elite cowpea genotypes with enhanced drought tolerance, by applying principal component analysis to 219 first-cycle progenies obtained in a recurrent selection program. The experimental design comprised a simple 15 x 15 lattice with 450 plots, each of two rows of 10 plants. Plants were grown under water-deficit conditions by applying a water depth of 205 mm, representing one-half of that required by cowpea. Variables assessed were flowering, maturation, pod length, number and mass of beans/pod, mass of 100 beans, and productivity/plot. Ten elite cowpea genotypes were selected, in which principal components 1 and 2 encompassed variables related to yield (pod length, beans/pod, and productivity/plot) and precocity (flowering and maturation), respectively.
Abstract:
Agricultural management with chemicals may contaminate the soil with heavy metals. The objective of this study was to apply Principal Component Analysis and geoprocessing techniques to identify the origin of the metals Cu, Fe, Mn, Zn, Ni, Pb, Cr and Cd as potential contaminants of agricultural soils. The study was developed in an area of vineyard cultivation in the State of São Paulo, Brazil. Soil samples were collected, and their positions recorded by GPS, under different land uses and covers. The metal concentrations in the soils were determined using the DTPA method. The Cu and Zn content was considered high in most of the samples, and was higher in the areas cultivated with vineyards that had been under the application of fungicides for several decades. The concentrations of Cu and Zn were correlated. The geoprocessing techniques and the Principal Component Analysis confirmed the enrichment of the soil with Cu and Zn due to the use and management of the vineyards with chemicals in the preceding decades.
Abstract:
OBJECTIVE(S): An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. METHODS: This investigation focussed on 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits. Multipoint variance component methods, implemented in SOLAR, were used to assess genome-wide linkage to the derived factors. A total of 285 of the 377 related individuals were informative for linkage analysis. RESULTS: Four principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h(2) = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosomes 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) were observed to segregate with principal components 1 (h(2) = 0.33) and 4 (h(2) = 0.42), respectively. CONCLUSION(S): This study investigated a number of CVD risk traits in a unique isolated population. The findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c and total triglyceride levels.
Abstract:
In this study, we explore motivation in collocated and virtual project teams. The literature on motivation in a project setting reveals that motivation is closely linked to team performance. Based on this literature, we propose a set of variables related to the three dimensions of ‘Nature of work’, ‘Rewards’, and ‘Communication’. Thirteen original variables in a sample of 66 collocated and 66 virtual respondents are investigated using a one-tailed t-test and principal component analysis. We find that there are minimal differences between the two groups with respect to the above-mentioned three dimensions (p = .06; t = 1.71). Further, a principal component analysis of the combined sample of collocated and virtual project environments reveals two factors: an ‘Internal Motivating Factor’ related to work and the work environment, and an ‘External Motivating Factor’ related to financial and non-financial rewards. Together these explain 59.8% of the variance and comprehensively characterize motivation in collocated and virtual project environments. A ‘sense check’ of our interpretation of the results shows conformity with the theory and existing practice of project organization.
Abstract:
The main purpose of this article is to gain an insight into the relationships between variables describing the environmental conditions of the Far Northern section of the Great Barrier Reef, Australia. Several of the variables describing these conditions had different measurement levels and often they had non-linear relationships. Using non-linear principal component analysis, it was possible to acquire an insight into these relationships. Furthermore, three geographical areas with unique environmental characteristics could be identified.
Abstract:
Gene microarray technology is highly effective in screening for differential gene expression and has hence become a popular tool in the molecular investigation of cancer. When applied to tumours, molecular characteristics may be correlated with clinical features such as response to chemotherapy. Exploitation of the huge amount of data generated by microarrays is difficult, however, and constitutes a major challenge in the advancement of this methodology. Independent component analysis (ICA), a modern statistical method, allows us to better understand data in such complex and noisy measurement environments. The technique has the potential to significantly increase the quality of the resulting data and improve the biological validity of subsequent analysis. We performed microarray experiments on 31 postmenopausal endometrial biopsies, comprising 11 benign and 20 malignant samples. We compared ICA to the established methods of principal component analysis (PCA), Cyber-T, and SAM. We show that ICA generated patterns that clearly characterized the malignant samples studied, in contrast to PCA. Moreover, ICA improved the biological validity of the genes identified as differentially expressed in endometrial carcinoma, compared to those found by Cyber-T and SAM. In particular, several genes involved in lipid metabolism that are differentially expressed in endometrial carcinoma were only found using this method. This report highlights the potential of ICA in the analysis of microarray data.
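A minimal sketch of the PCA-versus-ICA comparison described above, using synthetic data in place of the microarray measurements (the sample sizes and the planted "malignant" signature are illustrative assumptions, not the study's data):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
# Toy expression matrix: 40 biopsies x 100 genes; the last 20 samples
# carry a planted non-Gaussian "malignant" signature.
n_samples, n_genes = 40, 100
signature = rng.laplace(size=n_genes)
X = rng.normal(scale=0.5, size=(n_samples, n_genes))
X[20:] += signature

# Both methods reduce each sample to 3 component scores; ICA seeks
# statistically independent (non-Gaussian) components rather than
# directions of maximal variance.
pca_scores = PCA(n_components=3).fit_transform(X)
ica_scores = FastICA(n_components=3, random_state=0).fit_transform(X)
print(pca_scores.shape, ica_scores.shape)
```

In the study itself, the component scores would then be inspected for separation of the benign and malignant groups, and gene loadings examined for biological plausibility.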
Abstract:
A central question in Neuroscience is that of how the nervous system generates the spatiotemporal commands needed to realize complex gestures, such as handwriting. A key postulate is that the central nervous system (CNS) builds up complex movements from a set of simpler motor primitives or control modules. In this study we examined the control modules underlying the generation of muscle activations when performing different types of movement: discrete, point-to-point movements in eight different directions and continuous figure-eight movements in both the normal, upright orientation and rotated 90 degrees. To test for the effects of biomechanical constraints, movements were performed in the frontal-parallel or sagittal planes, corresponding to two different nominal flexion/abduction postures of the shoulder. In all cases we measured limb kinematics and surface electromyographic (EMG) signals for seven different muscles acting around the shoulder. We first performed principal component analysis (PCA) of the EMG signals on a movement-by-movement basis. We found a surprisingly consistent pattern of muscle groupings across movement types and movement planes, although we could detect systematic differences between the PCs derived from movements performed in each shoulder posture and between the principal components associated with the different orientations of the figure. Unexpectedly, we found no systematic differences between the figure eights and the point-to-point movements. The first three principal components could be associated with a general co-contraction of all seven muscles plus two patterns of reciprocal activation.
From these results, we surmise that both "discrete-rhythmic movements" such as the figure eight and discrete point-to-point movements may be constructed from three fundamental modules: one regulating the impedance of the limb over the time span of the movement and two others operating to generate movement, one aligned with the vertical and the other with the horizontal.
Abstract:
Copyright © (2014) by the International Machine Learning Society (IMLS) All rights reserved. Classical methods such as Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are ubiquitous in statistics. However, these techniques are only able to reveal linear relationships in data. Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive in the large scale. In a separate strand of recent research, randomized methods have been proposed to construct features that help reveal nonlinear patterns in data. For basic tasks such as regression or classification, random features exhibit little or no loss in performance, while achieving drastic savings in computational requirements. In this paper we leverage randomness to design scalable new variants of nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such as spectral clustering or LDA. We demonstrate our algorithms through experiments on real-world data, on which we compare against the state-of-the-art. A simple R implementation of the presented algorithms is provided.
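The randomized approach described above can be sketched by combining a random feature map with ordinary linear PCA. This is an illustrative approximation in the spirit of the paper, not its implementation: scikit-learn's RBFSampler generates random Fourier features approximating an RBF kernel, and linear PCA is then run in that randomized feature space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_approximation import RBFSampler

rng = np.random.default_rng(2)
# Data on a noisy circle: linear PCA alone cannot unwrap this structure.
theta = rng.uniform(0, 2 * np.pi, size=500)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(500, 2))

# Map the data through random Fourier features (approximate RBF kernel),
# then apply ordinary linear PCA in the feature space -- a scalable
# stand-in for kernel PCA.
rff = RBFSampler(gamma=1.0, n_components=300, random_state=0)
Z = rff.fit_transform(X)                  # (500, 300) random feature map
scores = PCA(n_components=2).fit_transform(Z)
print(scores.shape)
```

Because the feature map has a fixed, moderate width, the cost grows linearly in the number of samples rather than quadratically as with exact kernel methods.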
Abstract:
This is the first paper to introduce a nonlinearity test for principal component models. The methodology involves the division of the data space into disjoint regions that are analysed by principal component analysis based on the cross-validation principle. Several toy examples have been successfully analysed, and the nonlinearity test has subsequently been applied to data from an internal combustion engine.
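A toy version of the regional idea behind such a nonlinearity test (illustrative only; the paper's cross-validated procedure is more elaborate): fit one-component PCA models globally and on disjoint regions of the data space, and compare their residuals. For data lying on a curve, the regional models fit markedly better.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Data on a quadratic curve: one global linear PC model fits poorly,
# while local (regional) PC models fit well -- a signature of nonlinearity.
x = rng.uniform(-1, 1, size=400)
X = np.c_[x, x**2] + rng.normal(scale=0.02, size=(400, 2))

def residual_var(data):
    """Mean squared residual of a one-component PCA model of `data`."""
    pca = PCA(n_components=1).fit(data)
    recon = pca.inverse_transform(pca.transform(data))
    return np.mean((data - recon) ** 2)

# Global model vs. models fitted on two disjoint regions (a simple split
# on the first coordinate stands in for the paper's region analysis).
global_err = residual_var(X)
left, right = X[X[:, 0] < 0], X[X[:, 0] >= 0]
regional_err = (residual_var(left) + residual_var(right)) / 2
print(global_err, regional_err)
```

A large gap between the global and regional residuals flags the data as poorly described by a single linear principal component model.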
Abstract:
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA), prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned, as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
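The two-stage structure described above (linear PCA followed by an autoassociative network) can be sketched as follows. This is a generic stand-in, not the paper's T2T model: scikit-learn's MLPRegressor is trained to map the PC scores back onto themselves through a narrow bottleneck.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# 10 recorded "process variables" driven by 2 underlying sources.
S = rng.normal(size=(300, 2))
X = S @ rng.normal(size=(2, 10)) + rng.normal(scale=0.05, size=(300, 10))

# Step 1: linear PCA reduces the 10 variables to a smaller, better-
# conditioned set of scores (5 here, an illustrative choice).
T = PCA(n_components=5).fit_transform(X)
T = T / T.std(axis=0)  # standardize the scores before network training

# Step 2: an autoassociative network maps the scores onto themselves
# through a 2-unit bottleneck -- the nonlinear PCA stage.
aan = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                   max_iter=3000, random_state=0).fit(T, T)
recon_error = np.mean((aan.predict(T) - T) ** 2)
print(recon_error)
```

For monitoring, the reconstruction residual of new data against this model would feed the fault detection and isolation statistics; the variable reconstruction step repairs suspect sensor channels before recomputing contributions.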
Abstract:
In this paper, we present a Statistical Shape Model for Human Figure Segmentation in gait sequences. Point Distribution Models (PDM) generally use Principal Component Analysis (PCA) to describe the main directions of variation in the training set. However, PCA assumes a number of restrictions on the data that do not always hold. In this work, we explore the potential of Independent Component Analysis (ICA) as an alternative shape decomposition for PDM-based Human Figure Segmentation. The shape model obtained enables accurate estimation of human figures despite segmentation errors in the input silhouettes, and has very good convergence properties.
Abstract:
Reducing wafer metrology continues to be a major target in semiconductor manufacturing efficiency initiatives because it is a high-cost, non-value-added operation that impacts cycle time and throughput. However, metrology cannot be eliminated completely, given the important role it plays in process monitoring and advanced process control. To achieve the required manufacturing precision, measurements are typically taken at multiple sites across a wafer. The selection of these sites is usually based on a priori knowledge of wafer failure patterns and spatial variability, with additional sites added over time in response to process issues. As a result, significant redundancy often exists in the wafer measurement plans of mature processes. This paper proposes a novel methodology based on Forward Selection Component Analysis (FSCA) for analyzing historical metrology data in order to determine the minimum set of wafer sites needed for process monitoring. The paper also introduces a virtual metrology (VM) based approach for reconstructing the complete wafer profile from the optimal sites identified by FSCA. The proposed methodology is tested and validated on a wafer manufacturing metrology dataset. © 2012 IEEE.
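The greedy selection at the heart of FSCA can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the paper's code: at each step, the wafer site (column) whose inclusion best reconstructs the full metrology matrix by least squares is added to the selected set.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy metrology matrix: 50 wafers x 12 sites, where every site is a noisy
# combination of 3 latent spatial patterns (so ~3 sites should suffice).
latent = rng.normal(size=(50, 3))
X = latent @ rng.normal(size=(3, 12)) + rng.normal(scale=0.1, size=(50, 12))
Xc = X - X.mean(axis=0)  # center columns, as in PCA-style analyses

def fsca(X, k):
    """Greedy forward selection: repeatedly add the column that minimises
    the least-squares error of reconstructing the whole matrix."""
    selected = []
    for _ in range(k):
        best, best_err = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            coef, *_ = np.linalg.lstsq(cols, X, rcond=None)
            err = np.sum((X - cols @ coef) ** 2)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
    return selected, best_err

sites, err = fsca(Xc, 3)
explained = 1 - err / np.sum(Xc ** 2)
print(sites, explained)  # fraction of variance captured by 3 sites
```

The reconstruction regression doubles as a simple virtual metrology model: once the informative sites are chosen, the remaining sites are predicted from them.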
Abstract:
In the current context of serious climate change, in which the increased frequency of some extreme events can raise the number of periods prone to high-intensity forest fires, the National Forest Authority often implements, in several Portuguese forest areas, a regular set of measures to control the amount of available fuel mass (PNDFCI, 2008). In the present work we present a preliminary analysis of the consequences of prescribed fire measures, applied to control fuel mass, for soil recovery, in particular in terms of water retention capacity, organic matter content, pH and iron content. This work is part of a larger study (Meira-Castro, 2009(a); Meira-Castro, 2009(b)). Given the established practice of data collection, embodied in multidimensional matrices of n columns (variables under analysis) by p rows (areas sampled at different depths), and the quantitative nature of the data in this study, we chose a methodological approach based on multivariate statistical analysis, in particular Principal Component Analysis (PCA) (Góis, 2004). The experiments were carried out on a soil cover over a natural site of andalusitic schist in Gramelas, Caminha, NW Portugal, which had remained untouched by prescribed burning for four years and was subjected to prescribed fire in March 2008. Soil samples were collected from five plots at six time periods. The adopted methodology allowed us to identify the most relevant relational structures within the n variables, within the p samples, and in the two sets simultaneously (Garcia-Pereira, 1990). Consequently, in addition to the traditional outputs produced by PCA, we analysed the influence of both sampling depth and geomorphological environment on the behaviour of all variables involved.
Abstract:
Statistical shape analysis techniques commonly employed in the medical imaging community, such as active shape models or active appearance models, rely on principal component analysis (PCA) to decompose shape variability into a reduced set of interpretable components. In this paper we propose principal factor analysis (PFA) as an alternative and complementary tool to PCA providing a decomposition into modes of variation that can be more easily interpretable, while still being a linear efficient technique that performs dimensionality reduction (as opposed to independent component analysis, ICA). The key difference between PFA and PCA is that PFA models covariance between variables, rather than the total variance in the data. The added value of PFA is illustrated on 2D landmark data of corpora callosa outlines. Then, a study of the 3D shape variability of the human left femur is performed. Finally, we report results on vector-valued 3D deformation fields resulting from non-rigid registration of ventricles in MRI of the brain.
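The PCA/PFA contrast described above can be illustrated with scikit-learn, whose FactorAnalysis estimates shared covariance plus per-variable noise terms (used here as a stand-in for the paper's principal factor analysis; the data are synthetic):

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(6)
# Toy "landmark" data: 100 shapes x 8 coordinates driven by 2 modes of
# variation, with unequal (heteroscedastic) per-coordinate noise.
modes = rng.normal(size=(100, 2))
noise = rng.normal(size=(100, 8)) * rng.uniform(0.05, 0.5, size=8)
X = modes @ rng.normal(size=(2, 8)) + noise

# PCA models total variance; factor analysis models only the covariance
# shared between variables, absorbing coordinate-specific noise into
# per-variable uniquenesses -- often yielding more interpretable modes.
pca_modes = PCA(n_components=2).fit(X).components_
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
fa_modes = fa.components_
print(pca_modes.shape, fa_modes.shape, fa.noise_variance_.shape)
```

In the shape-analysis setting, each row of the loading matrices is a mode of variation; the factor-analytic modes are less contaminated by landmark-specific noise than the PCA modes when the noise is uneven across landmarks.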