925 results for probabilistic principal component analysis (probabilistic PCA)
Abstract:
Principal components analysis (PCA) has been described for over 50 years; however, it is rarely applied to the analysis of epidemiological data. In this study PCA was critically appraised for its ability to reveal relationships between pulsed-field gel electrophoresis (PFGE) profiles of methicillin-resistant Staphylococcus aureus (MRSA), in comparison with the more commonly employed cluster analysis and representation by dendrograms. The PFGE type following SmaI chromosomal digest was determined for 44 multidrug-resistant hospital-acquired methicillin-resistant S. aureus (MR-HA-MRSA) isolates, two multidrug-resistant community-acquired MRSA (MR-CA-MRSA) isolates, 50 hospital-acquired MRSA (HA-MRSA) isolates (from the University Hospital Birmingham, NHS Trust, UK) and 34 community-acquired MRSA (CA-MRSA) isolates (from general practitioners in Birmingham, UK). Strain relatedness was determined using Dice band-matching with UPGMA clustering and PCA. The results indicated that PCA revealed relationships between MRSA strains that were more strongly correlated with known epidemiology, most likely because, unlike cluster analysis, PCA is not constrained to generate a hierarchic classification. In addition, PCA provides the opportunity for further analysis to identify key polymorphic bands within complex genotypic profiles, which is not always possible with dendrograms. Here we provide a detailed description of a PCA method for the analysis of PFGE profiles to further complement the epidemiological study of infectious disease. © 2005 Elsevier B.V. All rights reserved.
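To make the band-profile analysis concrete, here is a minimal sketch of a PCA over a binary band-presence matrix (isolates × PFGE bands), with bands ranked by the magnitude of their loadings on the first component. The data are randomly generated stand-ins for real gel profiles, and the choice of the top five bands is arbitrary.

```python
import numpy as np

# Hypothetical binary band-presence matrix: rows = isolates, columns = PFGE
# bands (1 = band present, 0 = absent). Real profiles would come from gel
# analysis software, not a random generator.
rng = np.random.default_rng(0)
profiles = rng.integers(0, 2, size=(10, 30)).astype(float)

# Centre each band (column) and extract principal components via SVD.
centred = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * s                      # isolate coordinates on the PCs
explained = s**2 / np.sum(s**2)     # fraction of variance per component

# Band loadings on PC1 flag the polymorphic bands driving the separation.
key_bands = np.argsort(np.abs(Vt[0]))[::-1][:5]
```

Ranking the loadings on the leading components is what makes it possible to trace a separation between strains back to specific polymorphic bands, which a dendrogram cannot do directly.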
Abstract:
Ten cases of neuronal intermediate filament inclusion disease (NIFID) were studied quantitatively. The α-internexin positive neurofilament inclusions (NI) were most abundant in the motor cortex and CA sectors of the hippocampus. The densities of the NI and the swollen achromatic neurons (SN) were similar in laminae II/III and V/VI but glial cell density was greater in V/VI. The density of the NI was positively correlated with the SN and the glial cells. Principal components analysis (PCA) suggested that PC1 was associated with variation in neuronal loss in the frontal/temporal lobes and PC2 with neuronal loss in the frontal lobe and NI density in the parahippocampal gyrus. The data suggest: 1) frontal and temporal lobe degeneration in NIFID is associated with the widespread formation of NI and SN, 2) NI and SN affect cortical laminae II/III and V/VI, 3) the NI and SN affect closely related neuronal populations, and 4) variations in neuronal loss and in the density of NI were the most important sources of pathological heterogeneity. © Springer-Verlag 2005.
Abstract:
In Statnotes 24 and 25, multiple linear regression, a statistical method that examines the relationship between a single dependent variable (Y) and two or more independent variables (X), was described. The principal objective of such an analysis was to determine which of the X variables had a significant influence on Y and to construct an equation that predicts Y from the X variables. ‘Principal components analysis’ (PCA) and ‘factor analysis’ (FA) are also methods of examining the relationships between different variables, but they differ from multiple regression in that no distinction is made between the dependent and independent variables, all variables being treated essentially the same. Originally, PCA and FA were regarded as distinct methods, but in recent times they have been combined into a single analysis, with PCA often being the first stage of an FA. The basic objective of a PCA/FA is to examine the relationships between the variables, i.e., the ‘structure’ of the variables, and to determine whether these relationships can be explained by a smaller number of ‘factors’. This statnote describes the use of PCA/FA in the analysis of the differences between the DNA profiles of different MRSA strains introduced in Statnote 26.
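The PCA-as-first-stage-of-FA workflow described here can be illustrated as follows; the data are synthetic (six variables driven by two latent factors), and the eigenvalue-greater-than-one retention rule (Kaiser criterion) is one common convention rather than necessarily the one used in the statnote.

```python
import numpy as np

# Synthetic stand-ins for the profile variables: six observed variables
# generated from two latent factors plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 2))
data = latent @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(100, 6))

# Stage 1 (PCA): eigen-decompose the correlation matrix of the variables.
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Stage 2 (FA): retain components with eigenvalue > 1 as 'factors'
# (Kaiser criterion) and compute their loadings on the variables.
n_factors = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
```

The loadings matrix shows how strongly each observed variable is associated with each retained factor, which is what reveals the ‘structure’ of the variables.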
Abstract:
PCA/FA is a method of analyzing complex data sets in which there are no clearly defined X or Y variables. It has multiple uses, including the study of the pattern of variation between individual entities, such as patients with particular disorders, and the detailed study of descriptive variables. In most applications, variables are related to a smaller number of ‘factors’ or PCs that account for the maximum variance in the data and hence may explain important trends among the variables. An increasingly important application of the method is in the ‘validation’ of questionnaires that attempt to relate subjective aspects of a patient’s experience to more objective measures of vision.
Abstract:
Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function for time held fixed is called the yield curve; the aggregate of these sections is the evolution of the yield curve. This dissertation studies aspects of this evolution.

There are two complementary approaches to the study of yield curve evolution here. The first is principal components analysis; the second is wavelet analysis. In both approaches both the time and maturity variables are discretized. In principal components analysis the vectors of yield curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized; the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution.

In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market.

Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary.

Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and the specially adapted wavelet construction. The result is more robust statistics, which provide balance to the more fragile principal components analysis.
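A minimal sketch of the principal-components step: treat discretized yield-curve shifts as multivariate observations, diagonalize their sample covariance matrix, and inspect the cumulative variance captured by the leading eigenvalues. The curves here are simulated with a simple level-plus-slope factor structure, not real Treasury data, so only two components dominate rather than six.

```python
import numpy as np

# Simulated daily yield-curve shifts across 12 maturities, driven by a
# parallel 'level' factor and a maturity-proportional 'slope' factor
# (assumed magnitudes, chosen for illustration only).
rng = np.random.default_rng(2)
maturities = np.linspace(0.25, 30, 12)
n_days = 500
level = rng.normal(scale=0.05, size=(n_days, 1)) * np.ones_like(maturities)
slope = rng.normal(scale=0.02, size=(n_days, 1)) * (maturities / 30)
noise = rng.normal(scale=0.002, size=(n_days, len(maturities)))
shifts = level + slope + noise

# Diagonalize the covariance matrix of the shift vectors; the eigenvalues
# give the variance attributable to each principal component.
cov = np.cov(shifts, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]          # descending order
explained = np.cumsum(eigvals) / eigvals.sum()   # cumulative variance share
```

Plotting the eigenvectors would recover the familiar level/slope/curvature shapes; the cumulative share shows how few components are needed before the remainder is plausibly noise.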
Abstract:
Inductively Coupled Plasma Optical Emission Spectrometry was used to determine Ca, Mg, Mn, Fe, Zn and Cu in samples of processed and natural coconut water. Sample preparation consisted of a filtration step followed by a dilution. The analysis was performed using optimized instrumental parameters and the results were evaluated using Pattern Recognition methods. The data showed comparable concentration values for the analytes present in processed and natural samples. Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) nevertheless indicated that the samples of different kinds were statistically different when the concentrations of all the analytes were considered simultaneously.
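A sketch of the pattern-recognition step with invented concentrations: autoscale the element data (Ca, Mg, Mn, Fe, Zn, Cu) and project onto principal components to check whether processed and natural samples separate in the score space. The group means, sample counts and noise level are assumptions for illustration, not the study's measurements.

```python
import numpy as np

# Invented concentration matrices (8 natural + 8 processed samples,
# 6 elements each); the two group means are deliberately different.
rng = np.random.default_rng(3)
natural = rng.normal(loc=[250, 60, 2, 0.5, 1.0, 0.2], scale=0.05, size=(8, 6))
processed = rng.normal(loc=[230, 55, 1, 0.8, 1.5, 0.4], scale=0.05, size=(8, 6))
X = np.vstack([natural, processed])

# Autoscaling (mean 0, unit variance per element) is the usual pre-treatment
# when analytes span very different concentration ranges.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s

# Distance between the group centroids along PC1 indicates whether the two
# kinds of samples separate when all analytes are considered simultaneously.
sep = abs(scores[:8, 0].mean() - scores[8:, 0].mean())
```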
Abstract:
Objectives: The aim of this work was to verify the differentiation between normal and pathological human carotid artery tissues by using fluorescence and reflectance spectroscopy in the 400- to 700-nm range and the spectral characterization by means of principal components analysis. Background Data: Atherosclerosis is the most common and serious pathology of the cardiovascular system. Principal components represent the main spectral characteristics that occur within the spectral data and could be used for tissue classification. Materials and Methods: Sixty postmortem carotid artery fragments (26 non-atherosclerotic and 34 atherosclerotic with non-calcified plaques) were studied. The excitation radiation consisted of a 488-nm argon laser. Two 600-μm core optical fibers were used, one for excitation and one to collect the fluorescence radiation from the samples. The reflectance system was composed of a halogen lamp coupled to an excitation fiber positioned in one of the ports of an integrating sphere that delivered 5 mW to the sample. The photo-reflectance signal was coupled to a 1/4-m spectrograph via an optical fiber. Euclidean distance was then used to classify each principal component score into one of two classes, normal and atherosclerotic tissue, for both fluorescence and reflectance. Results: The principal components analysis allowed classification of the samples with 81% sensitivity and 88% specificity for fluorescence, and 81% sensitivity and 91% specificity for reflectance. Conclusions: Our results showed that principal components analysis could be applied to differentiate between normal and atherosclerotic tissue with high sensitivity and specificity.
Abstract:
Fatty acid synthase (FASN) is the metabolic enzyme responsible for the endogenous synthesis of the saturated long-chain fatty acid palmitate. In contrast to most normal cells, FASN is overexpressed in a variety of human cancers including cutaneous melanoma, in which its levels of expression are associated with a poor prognosis and depth of invasion. Recently, we have demonstrated the mitochondrial involvement in FASN inhibition-induced apoptosis in melanoma cells. Herein we compare, via electrospray ionization mass spectrometry (ESI-MS), the free fatty acid (FFA) composition of mitochondria isolated from control (EtOH-treated) and Orlistat-treated B16-F10 mouse melanoma cells. Principal component analysis (PCA) was applied to the ESI-MS data and found to separate the two groups of samples. Mitochondria from control cells showed a predominance of six ions, that is, those of m/z 157 (Pelargonic, 9:0), 255 (Palmitic, 16:0), 281 (Oleic, 18:1), 311 (Arachidic, 20:0), 327 (Docosahexaenoic, 22:6) and 339 (Behenic, 22:0). In contrast, FASN inhibition with Orlistat significantly changed the mitochondrial FFA composition by reducing the synthesis of palmitic acid and of its elongation and unsaturation products, such as arachidic and behenic acids and oleic acid, respectively. ESI-MS of mitochondria isolated from Orlistat-treated cells therefore presented three major ions, of m/z 157 (Pelargonic, 9:0), 193 (unknown) and 199 (Lauric, 12:0). These findings therefore demonstrate that FASN inhibition by Orlistat induces significant changes in the FFA composition of mitochondria. Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method was to employ multiple eigen-time series in each ROI to avoid temporal information loss during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single "representative" time series per ROI). This, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping.
By using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal components estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
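The ROI-summarization idea the method builds on, i.e., keeping several eigen-time series per ROI instead of a single average, can be sketched as follows. The voxel time series are simulated from two latent signals, and the 80% variance threshold for choosing the number of retained components is a hypothetical choice, not the paper's.

```python
import numpy as np

# Simulated ROI: 50 voxel time series driven by two latent signals plus
# noise (a stand-in for preprocessed BOLD data).
rng = np.random.default_rng(5)
n_time, n_voxels = 200, 50
common = rng.normal(size=(n_time, 2))
mixing = rng.normal(size=(2, n_voxels))
roi = common @ mixing + 0.5 * rng.normal(size=(n_time, n_voxels))

# Eigen-time series = principal component time courses of the ROI.
centred = roi - roi.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Keep enough eigen-time series to explain 80% of the ROI variance
# (an illustrative threshold); averaging would keep only one signal.
k = int(np.searchsorted(np.cumsum(explained), 0.80) + 1)
eigen_ts = U[:, :k] * s[:k]
```

Feeding several eigen-time series per ROI into the causality analysis is what preserves the temporal information that a single averaged series discards.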
Abstract:
Fault detection and isolation (FDI) are important steps in the monitoring and supervision of industrial processes. Biological wastewater treatment (WWT) plants are difficult to model, and hence to monitor, because of the complexity of the biological reactions and because plant influent and disturbances are highly variable and/or unmeasured. Multivariate statistical models have been developed for a wide variety of situations over the past few decades, proving successful in many applications. In this paper we develop a new monitoring algorithm based on Principal Components Analysis (PCA). It can be seen equivalently as making Multiscale PCA (MSPCA) adaptive, or as a multiscale decomposition of adaptive PCA. Adaptive Multiscale PCA (AdMSPCA) exploits the changing multivariate relationships between variables at different time-scales. Adaptation of scale PCA models over time permits them to follow the evolution of the process, inputs or disturbances. Performance of AdMSPCA and adaptive PCA on a real WWT data set is compared and contrasted. The most significant difference observed was the ability of AdMSPCA to adapt to a much wider range of changes. This was mainly due to the flexibility afforded by allowing each scale model to adapt whenever it did not signal an abnormal event at that scale. Relative detection speeds were examined only summarily, but seemed to depend on the characteristics of the faults/disturbances. The results of the algorithms were similar for sudden changes, but AdMSPCA appeared more sensitive to slower changes.
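A non-adaptive sketch of the underlying multiscale-PCA idea: decompose each monitored variable with a Haar wavelet and fit a separate PCA to the detail coefficients at each scale, so faults appearing at different time-scales can be flagged independently. The three "process variables" are simulated sinusoids with noise, not WWT plant data, and the adaptation step of AdMSPCA is omitted for brevity.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar transform: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]          # drop an odd trailing sample, if any
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# Three correlated 'process variables' (simulated, not plant data).
rng = np.random.default_rng(6)
t = np.arange(256)
X = np.column_stack([np.sin(2 * np.pi * t / 64) + 0.1 * rng.normal(size=256)
                     for _ in range(3)])

scale_models = []
coeffs = X.copy()
for level in range(3):
    pairs = [haar_level(coeffs[:, j]) for j in range(X.shape[1])]
    approx = np.column_stack([a for a, _ in pairs])
    detail = np.column_stack([d for _, d in pairs])
    # Fit a PCA to the detail coefficients at this scale; monitoring
    # statistics (e.g. SPE) would be computed from this scale model.
    centred = detail - detail.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
    scale_models.append((eigvals[::-1], eigvecs[:, ::-1]))
    coeffs = approx                    # recurse on the approximation
```

In the adaptive variant described above, each of these scale models would additionally be updated over time whenever it does not signal an abnormal event at its scale.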
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Engenharia Informática (Informatics Engineering)
Abstract:
In this paper a new PCA-based positioning sensor and localization system for mobile robots to operate in unstructured environments (e.g. industry, services, domestic, ...) is proposed and experimentally validated. The inexpensive positioning system resorts to principal component analysis (PCA) of images acquired by a video camera installed onboard, looking upwards to the ceiling. This solution has the advantage of avoiding the need to select and extract features. The principal components of the acquired images are compared with previously registered images, stored in a reduced onboard image database, and the measured position is fused with odometry data. The optimal estimates of position and slippage are provided by Kalman filters, with globally stable error dynamics. The experimental validation reported in this work focuses on the results of a set of experiments carried out in a real environment, where the robot travels along a lawn-mower trajectory. A small position error estimate with bounded covariance was always observed, for arbitrarily long experiments, and slippage was estimated accurately in real time.
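The appearance-based measurement step can be sketched as follows: build a PCA basis from a small database of ceiling images taken at known positions, then localize a query image by projecting it onto that basis and taking the position of the nearest database entry. Images, positions, and the number of retained components are synthetic assumptions; in the system described above this measurement is then fused with odometry by a Kalman filter.

```python
import numpy as np

# Synthetic stand-ins: 20 'ceiling images' (flattened 16x16 pixels) taken
# at known (x, y) positions during a mapping run.
rng = np.random.default_rng(7)
n_db, n_pix = 20, 16 * 16
database = rng.normal(size=(n_db, n_pix))
positions = rng.uniform(0, 10, size=(n_db, 2))

# PCA basis of the image database (8 components, an arbitrary choice);
# storing low-dimensional codes avoids keeping full images onboard.
mean_img = database.mean(axis=0)
U, s, Vt = np.linalg.svd(database - mean_img, full_matrices=False)
basis = Vt[:8]
db_codes = (database - mean_img) @ basis.T

# A query image taken near database entry 5 (small additive noise).
query = database[5] + 0.01 * rng.normal(size=n_pix)
code = (query - mean_img) @ basis.T
nearest = np.linalg.norm(db_codes - code, axis=1).argmin()
measured_position = positions[nearest]   # the measurement fed to the filter
```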
Abstract:
In this paper a new method for self-localization of mobile robots, based on a PCA positioning sensor to operate in unstructured environments, is proposed and experimentally validated. The proposed PCA extension is able to perform the eigenvector computation from a set of signals corrupted by missing data. The sensor package considered in this work contains a 2D depth sensor pointed upwards to the ceiling, providing depth images with missing data. The resulting positioning sensor is then integrated in a Linear Parameter Varying mobile robot model to obtain a self-localization system, based on linear Kalman filters, with globally stable position error estimates. A study in which synthetic, randomly corrupted data were added to the captured depth images revealed that this extended PCA technique is able to reconstruct the signals with improved accuracy. The self-localization system obtained is assessed in unstructured environments and the methodologies are validated even in the case of varying illumination conditions.
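One standard way to obtain principal components from signals with missing entries is an EM-style iteration that alternates between imputing the gaps from a low-rank PCA reconstruction and refitting the PCA. The sketch below follows that generic scheme on simulated rank-one data with 20% of entries removed; it is not necessarily the exact extension proposed in the paper.

```python
import numpy as np

# Simulated low-rank 'depth signals' with additive noise, then 20% of the
# entries knocked out at random (stand-ins for missing depth pixels).
rng = np.random.default_rng(8)
true = np.outer(np.sin(np.linspace(0, 6, 40)), rng.normal(size=10))
data = true + 0.01 * rng.normal(size=true.shape)
mask = rng.random(data.shape) < 0.2            # True = missing entry
obs = np.where(mask, np.nan, data)

# Initialise the gaps with column means, then iterate: fit a rank-1 PCA,
# reconstruct, and re-impute only the missing entries.
filled = np.where(mask, np.nanmean(obs, axis=0), obs)
for _ in range(50):
    mean = filled.mean(axis=0)
    U, s, Vt = np.linalg.svd(filled - mean, full_matrices=False)
    recon = mean + (U[:, :1] * s[:1]) @ Vt[:1]  # rank-1 reconstruction
    filled = np.where(mask, recon, obs)

# Reconstruction error on the entries that were actually missing.
err = np.sqrt(np.mean((filled[mask] - data[mask]) ** 2))
```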
Abstract:
The ecotoxicological response of the living organisms in an aquatic system depends on the physical, chemical and bacteriological variables, as well as the interactions between them. An important challenge to scientists is to understand the interaction and behaviour of factors involved in a multidimensional process such as the ecotoxicological response. With this aim, multiple linear regression (MLR) and principal component regression were applied to the ecotoxicity bioassay response of Chlorella vulgaris and Vibrio fischeri in water collected at seven sites of the Leça river during five monitoring campaigns (February, May, June, August and September of 2006). The river water characterization included the analysis of 22 physicochemical and 3 microbiological parameters. The model that best fitted the data was MLR, which shows: (i) a negative correlation with dissolved organic carbon, zinc and manganese, and a positive one with turbidity and arsenic, regarding the C. vulgaris toxic response; (ii) a negative correlation with conductivity and turbidity and a positive one with phosphorus, hardness, iron, mercury, arsenic and faecal coliforms, concerning the V. fischeri toxic response. This integrated assessment may allow the evaluation of the effect of future pollution abatement measures on the water quality of the Leça River.
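Principal component regression, one of the two approaches compared above, can be sketched as follows: regress the response on the leading PC scores of the autoscaled predictors instead of on the raw, collinear variables. The data are synthetic stand-ins (one pair of predictors made deliberately collinear), and the number of retained components is an arbitrary illustrative choice.

```python
import numpy as np

# Synthetic stand-ins for the water-quality predictors and a bioassay
# response; predictors 0 and 1 are made nearly collinear on purpose.
rng = np.random.default_rng(9)
n, p = 35, 8
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=n)

# Autoscale the predictors, then compute PC scores via SVD.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 4                                 # retained components (illustrative)
T = U[:, :k] * s[:k]

# Regress the centred response on the scores; collinearity is no longer a
# problem because the scores are mutually orthogonal.
beta_pc, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
y_hat = y.mean() + T @ beta_pc
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```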
Abstract:
This study aims to optimize the water quality monitoring of a polluted watercourse (Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological and ecotoxicological data (with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) obtained from the analysis of water samples monthly collected at seven monitoring sites during five campaigns (February, May, June, August, and September 2006). The results of some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to the classification of the Leça River water quality as “bad” or “very bad”. PCA and CA identified monitoring sites with similar pollution patterns, distinguishing site 1 (located in the upstream stretch of the river) from all other sampling sites downstream. Ecotoxicity results corroborated this classification, thus revealing differences in space and time. The present study includes not only physical, chemical and bacteriological but also ecotoxicological parameters, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful to optimize water quality monitoring networks, defining the minimum number of sites and their location. Thus, these tools can support appropriate management decisions.
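A toy illustration of how site-profile similarity supports monitoring-network optimization: compute distances between standardized site profiles, find the closest pair (candidates for merging, as in the first step of an agglomerative cluster analysis) and the site least similar to any other (one that must be kept). The seven invented profiles make site 1 deliberately distinct, mimicking the upstream site described above.

```python
import numpy as np

# Invented profiles: 7 sites x 5 water-quality parameters; row 0 ("site 1")
# is shifted to stand apart from the downstream sites.
rng = np.random.default_rng(10)
sites = rng.normal(loc=1.0, scale=0.1, size=(7, 5))
sites[0] += 3.0

# Standardize parameters, then compute pairwise Euclidean distances.
Z = (sites - sites.mean(axis=0)) / sites.std(axis=0)
d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)

closest_pair = np.unravel_index(d.argmin(), d.shape)  # redundancy candidates
most_distinct = d.min(axis=1).argmax()                # site to keep regardless
```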