887 results for principal component analysis
Abstract:
This dissertation establishes a novel data-driven method to identify language network activation patterns in pediatric epilepsy through the use of Principal Component Analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects’ data sets from five different hospitals were included in the study through a web-based repository site designed at FIU. Research was conducted to evaluate different classification and clustering techniques for identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of the lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language network patterns into the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to establish nonlinear decision functions (NDF), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. In this case, three distinct activation patterns (groups) were identified (average kappa of 0.65 with visual rating and 0.76 with LI), characterized by: (1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; (2) the IFG, left mesial frontal lobe, and right cerebellum, representing a variant left-dominant pattern with higher activation; and (3) the right homologues of the first pattern in Broca's and Wernicke's language areas. Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization: its high-intensity activation suggests a possible remote effect of a right-hemisphere focus on traditionally left-lateralized functions. In summary, this data-driven method provides new insights into mechanisms of brain compensation/reorganization and neural plasticity in pediatric epilepsy.
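A minimal sketch of how such a PCA decisional space and nonlinear decision function might be assembled, assuming scikit-learn; the random matrix below stands in for flattened fMRI activation maps, and the labels, sizes, and degree-7 polynomial SVM are illustrative placeholders rather than the dissertation's actual pipeline.

```python
# Hedged sketch (not the dissertation's exact method): project synthetic
# "activation maps" into a 4-D PCA decisional space and fit a nonlinear
# classifier of polynomial degree 7, echoing the reported configuration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_maps, n_voxels = 200, 5000                 # hypothetical sizes
X = rng.normal(size=(n_maps, n_voxels))      # flattened activation maps (placeholder)
y = rng.integers(0, 2, size=n_maps)          # 0 = typical, 1 = atypical (placeholder labels)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),                     # 4-D decisional space
    SVC(kernel="poly", degree=7),            # nonlinearity degree 7
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```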
Abstract:
Based on a well-established stratigraphic framework and 47 AMS-14C-dated sediment cores, the distribution of facies types on the NW Iberian margin is analysed in response to the last deglacial sea-level rise, thus providing a case study on the sedimentary evolution of a high-energy, low-accumulation shelf system. Altogether, four main types of sedimentary facies are defined. (1) A gravel-dominated facies occurs mostly as time-transgressive ravinement beds, which initially developed as shoreface and storm deposits in shallow waters on the outer shelf during the last sea-level lowstand; (2) a widespread, time-transgressive mixed siliceous/biogenic-carbonaceous sand facies indicates areas of moderate hydrodynamic regimes, high contribution of reworked shelf material, and fluvial supply to the shelf; (3) a glaucony-containing sand facies in a stationary position on the outer shelf formed mostly during the last deglacial sea-level rise by reworking of older deposits as well as authigenic mineral formation; and (4) a mud facies is mostly restricted to confined Holocene fine-grained depocentres, which are located in a mid-shelf position. The observed spatial and temporal distribution of these facies types on the high-energy, low-accumulation NW Iberian shelf was essentially controlled by the local interplay of sediment supply, shelf morphology, and strength of the hydrodynamic system. These patterns contrast with high-accumulation systems, where extensive sediment supply is the dominant factor in facies distribution. This study emphasises the importance of large-scale erosion and material recycling in the sedimentary buildup during the deglacial drowning of the shelf. The presence of a homogeneous, up to 15-m-thick transgressive cover above a lag horizon contradicts the common assumption of sparse and laterally confined sediment accumulation on high-energy shelf systems during deglacial sea-level rise. In contrast to this extensive sand cover, laterally very confined mud depocentres, at most 4 m thick, developed during the Holocene sea-level highstand. This restricted formation of fine-grained depocentres was related to the combination of: (1) frequently occurring high-energy hydrodynamic conditions; (2) low overall terrigenous input from the adjacent rivers; and (3) the large distance of the Galicia Mud Belt from its main sediment supplier.
Abstract:
Finite-Difference Time-Domain (FDTD) algorithms are well-established tools of computational electromagnetism. Because of their practical implementation as computer codes, they are affected by many numerical artefacts and noise. In order to obtain better results, we propose using Principal Component Analysis (PCA), based on multivariate statistical techniques. PCA has been successfully used for the analysis of noise and spatio-temporal structure in a sequence of images. It allows a straightforward discrimination between the numerical noise and the actual electromagnetic variables, and the quantitative estimation of their respective contributions. Besides, the FDTD results can be filtered to remove the effect of the noise. In this contribution we show how the method can be applied to several FDTD simulations: the propagation of a pulse in vacuum and the analysis of two-dimensional photonic crystals. In this last case, PCA has revealed hidden electromagnetic structures related to actual modes of the photonic crystal.
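As a rough illustration of the filtering idea (not the authors' implementation), the following sketch builds a synthetic 1-D pulse sequence, performs PCA via an SVD of the snapshot matrix, and reconstructs the fields from the leading components; the grid sizes and noise level are assumptions.

```python
# Sketch only: separate dominant space-time structure from low-variance
# numerical noise in a sequence of 1-D field snapshots using PCA (via SVD),
# then rebuild a filtered sequence from the leading components.
import numpy as np

n_t, n_x = 400, 256
x = np.linspace(0.0, 1.0, n_x)
t = np.arange(n_t)
# Synthetic propagating Gaussian pulse plus small-amplitude "numerical" noise.
field = np.exp(-((x[None, :] - 0.1 - 0.002 * t[:, None]) ** 2) / 0.001)
field += 1e-3 * np.random.default_rng(1).normal(size=field.shape)

# PCA of the mean-centred snapshot matrix (rows = time steps, cols = grid points).
mean = field.mean(axis=0)
U, S, Vt = np.linalg.svd(field - mean, full_matrices=False)
explained = S**2 / np.sum(S**2)

k = 3                                           # keep the leading components only
filtered = mean + (U[:, :k] * S[:k]) @ Vt[:k]
print("variance captured by first", k, "components:", explained[:k].sum())
print("filtered field shape:", filtered.shape)
```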
Abstract:
Speckle is being used as a characterization tool for the analysis of the dynamics of slowly varying phenomena occurring in biological and industrial samples. The retrieved data take the form of a sequence of speckle images. The analysis of these images should reveal the inner dynamics of the biological or physical process taking place in the sample. Very recently, it has been shown that principal component analysis is able to split the original data set into a collection of classes. These classes can be related to the dynamics of the observed phenomena. At the same time, statistical descriptors of biospeckle images have been used to retrieve information on the characteristics of the sample. These statistical descriptors can be calculated in almost real time and provide fast monitoring of the sample. On the other hand, principal component analysis requires longer computation time, but the results contain more information related to spatio-temporal patterns that can be identified with physical processes. This contribution merges both descriptions and uses principal component analysis as a pre-processing tool to obtain a collection of filtered images from which a simpler statistical descriptor can be calculated. The method has been applied to slowly varying biological and industrial processes.
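A hedged sketch of the combined approach described above, assuming scikit-learn and a random stand-in for the speckle image stack: PCA filters the sequence, and a simple per-pixel temporal standard deviation then serves as the statistical descriptor.

```python
# Sketch under stated assumptions (synthetic frames, not real biospeckle data):
# PCA-filter a speckle image sequence, then compute a simple activity descriptor
# (temporal standard deviation per pixel) on the filtered frames.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_frames, h, w = 100, 64, 64
frames = rng.normal(size=(n_frames, h, w))        # placeholder speckle stack

X = frames.reshape(n_frames, -1)                  # rows = frames, cols = pixels
pca = PCA(n_components=10)
scores = pca.fit_transform(X)

# Drop the first component (often the static background), keep the rest,
# map back to image space, and measure per-pixel temporal activity.
scores[:, 0] = 0.0
filtered = pca.inverse_transform(scores).reshape(n_frames, h, w)
activity_map = filtered.std(axis=0)
print("activity map shape:", activity_map.shape)
```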
Abstract:
The analysis of white latex paint is a problem for forensic laboratories because of the difficulty in differentiating between samples. Current methods provide limited information that is not suitable for discrimination. Elemental analysis of white latex paints has achieved 99% discriminating power when using LA-ICP-MS; however, mass spectrometers can be prohibitively expensive and require a skilled operator. A quick, inexpensive, and effective method is needed for the differentiation of white latex paints. In this study, LIBS is used to analyze 24 white latex paint samples. LIBS is fast, easy to operate, and low in cost. Results show that 98.1% of the variation can be accounted for via principal component analysis, while Tukey pairwise comparisons differentiated 95.6% of samples using potassium as the elemental ratio, showing that the discrimination capabilities of LIBS are comparable to those of LA-ICP-MS. Due to the many advantages of LIBS, this instrument should be considered a necessity for forensic laboratories.
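For illustration only, the sketch below reproduces the two kinds of analysis mentioned (PCA explained variance and Tukey pairwise comparisons) on random placeholder intensity ratios; the sample counts, replicate counts, and ratio matrix are assumptions, not the study's LIBS data.

```python
# Illustrative sketch (random stand-in data, not the study's LIBS spectra):
# PCA on element-to-potassium intensity ratios plus Tukey pairwise comparisons
# of one ratio across paint samples.
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
n_samples, n_replicates, n_ratios = 24, 5, 8
ratios = rng.normal(size=(n_samples * n_replicates, n_ratios))  # placeholder ratios
labels = np.repeat(np.arange(n_samples), n_replicates)          # paint sample IDs

pca = PCA()
pca.fit(ratios)
print("variance explained by first 3 PCs:",
      pca.explained_variance_ratio_[:3].sum())

# Tukey HSD on a single elemental ratio (e.g., one element normalised to K).
tukey = pairwise_tukeyhsd(ratios[:, 0], labels)
print(tukey.summary())
```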
Abstract:
LOPES-DOS-SANTOS, V.; CONDE-OCAZIONEZ, S.; NICOLELIS, M. A. L.; RIBEIRO, S. T.; TORT, A. B. L. Neuronal assembly detection and cell membership specification by principal component analysis. PLoS ONE, v. 6, p. e20996, 2011.
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery, the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not unmix all sources correctly. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly if the number of sources is large and the signal-to-noise ratio (SNR) is high.
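A minimal simulation of the setting discussed above, assuming scikit-learn's FastICA: abundances drawn from a Dirichlet distribution sum to one per pixel, so the sources are deliberately dependent; the endmember spectra and noise level are placeholders, not the paper's generative model.

```python
# Sketch: mix "abundance" sources that sum to one per pixel and attempt to
# recover them with FastICA; the sum-to-one constraint makes the sources
# dependent, which is exactly the issue the paper studies.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_pixels, n_sources, n_bands = 2000, 3, 50

# Dirichlet-distributed abundances: nonnegative and summing to one per pixel.
abundances = rng.dirichlet(alpha=np.ones(n_sources), size=n_pixels)
endmembers = rng.uniform(0.0, 1.0, size=(n_sources, n_bands))   # placeholder spectra
observations = abundances @ endmembers + 0.01 * rng.normal(size=(n_pixels, n_bands))

ica = FastICA(n_components=n_sources, random_state=0)
estimated_sources = ica.fit_transform(observations)
print("estimated abundance-like sources:", estimated_sources.shape)
```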
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.
Abstract:
This paper characterizes humic substances (HS) extracted from soil samples collected in the Rio Negro basin in the state of Amazonas, Brazil, particularly investigating their reduction capabilities towards Hg(II) in order to elucidate potential mercury cycling/volatilization in this environment. For this purpose, a multimethod approach was used, consisting of both instrumental methods (elemental analysis, EPR, solid-state NMR, and FIA combined with cold-vapor AAS of Hg(0)) and statistical methods such as principal component analysis (PCA) and a central composite factorial planning method. The HS under study were divided into two groups, complexing and reducing, owing to the different distribution of their functionalities. The main functionalities correlated with reduction of Hg(II) were phenolic, carboxylic, and amide groups, while the groups related to complexation of Hg(II) were ethers, hydroxyls, aldehydes, and ketones. The HS extracted from floodable regions of the Rio Negro basin presented a greater capacity to retain (complex, or adsorb physically and/or chemically) Hg(II), while those from non-floodable regions showed a greater capacity to reduce Hg(II), indicating that HS extracted from different types of regions contribute in different ways to the biogeochemical mercury cycle in the mid-Rio Negro basin, AM, Brazil.
Abstract:
As one of the newest members in the field of artificial immune systems (AIS), the Dendritic Cell Algorithm (DCA) is based on behavioural models of natural dendritic cells (DCs). Unlike other AIS, the DCA does not rely on training data; instead, domain or expert knowledge is required to predetermine the mapping between the input signals from a particular instance and the three categories used by the DCA. This data preprocessing phase has received the criticism of manually over-fitting the data to the algorithm, which is undesirable. Therefore, in this paper we attempt to ascertain whether it is possible to use principal component analysis (PCA) techniques to automatically categorise input data while still generating useful and accurate classification results. The integrated system is tested with a biometrics dataset for stress recognition of automobile drivers. The experimental results show that the application of PCA to the DCA for the purpose of automated data preprocessing is successful.
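A hedged sketch of the preprocessing step only (the DCA itself is not implemented here), assuming scikit-learn and a random placeholder feature matrix rather than the automobile drivers dataset: the first three principal components stand in for the three DCA signal categories.

```python
# Sketch: project raw biometric features onto three principal components and
# treat the projections as automatically derived signal categories for the DCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_instances, n_features = 500, 12
features = rng.normal(size=(n_instances, n_features))   # placeholder biometrics

# Standardise, then keep three components, one per DCA signal category.
scaled = StandardScaler().fit_transform(features)
signals = PCA(n_components=3).fit_transform(scaled)
print("per-instance signal triples:", signals.shape)    # (500, 3)
```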
Abstract:
Using a Markov-switching unobserved components model, we decompose the term premium of the North American CDX index into a permanent and a stationary component. We establish that the inversion of the CDX term premium is induced by sudden changes in the unobserved stationary component, which represents the evolution of the fundamentals underpinning the probability of default in the economy. We find evidence that the monetary policy response from the Fed during the crisis period was effective in reducing the volatility of the term premium. We also show that equity returns make a substantial contribution to the term premium over the entire sample period.
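As a simplified illustration (omitting the Markov-switching element and using simulated data rather than the CDX term premium), the sketch below fits a statsmodels unobserved components model with a random-walk level as the permanent part and an AR(1) as the stationary part.

```python
# Sketch of a permanent/stationary decomposition: random-walk level plus AR(1),
# estimated on a simulated series. This is a simplification of the paper's
# Markov-switching model, shown only to illustrate the decomposition idea.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(6)
n = 500
permanent = np.cumsum(0.05 * rng.normal(size=n))        # random-walk component
stationary = np.zeros(n)
for t in range(1, n):                                   # AR(1) component
    stationary[t] = 0.8 * stationary[t - 1] + 0.1 * rng.normal()
series = permanent + stationary

model = UnobservedComponents(series, level="local level", autoregressive=1)
res = model.fit(disp=False)
print(res.summary())
print("smoothed permanent component (first 5):", res.level.smoothed[:5])
```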