968 results for Sparse component analysis


Relevance:

100.00%

Publisher:

Abstract:

The problem of nonnegative blind source separation (NBSS) is addressed in this paper, where both the sources and the mixing matrix are nonnegative. Because many real-world signals are sparse, we deal with NBSS by sparse component analysis. First, a determinant-based sparseness measure, named D-measure, is introduced to gauge the temporal and spatial sparseness of signals. Based on this measure, a new NBSS model is derived, and an iterative sparseness maximization (ISM) approach is proposed to solve this model. In the ISM approach, the NBSS problem can be cast into row-to-row optimizations with respect to the unmixing matrix, and then the quadratic programming (QP) technique is used to optimize each row. Furthermore, we analyze the source identifiability and the computational complexity of the proposed ISM-QP method. The new method requires relatively weak conditions on the sources and the mixing matrix, has high computational efficiency, and is easy to implement. Simulation results demonstrate the effectiveness of our method.
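
As a hedged illustration of the row-by-row structure described above, the sketch below optimizes each row of an unmixing matrix under a nonnegativity constraint on the extracted source. The L1 objective and unit-norm constraint are generic stand-ins chosen for this example; they are not the paper's D-measure or its ISM-QP updates.

```python
# Row-wise constrained optimization of an unmixing matrix W for nonnegative
# mixtures X = A @ S. The objective is a generic sparsity surrogate (the L1
# norm of the extracted source), NOT the determinant-based D-measure.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, T = 3, 100
S = rng.random((n, T)) * (rng.random((n, T)) < 0.3)   # sparse nonnegative sources
A = rng.random((n, n)) + 0.1 * np.eye(n)              # nonnegative mixing matrix
X = A @ S                                             # observed nonnegative mixtures

W = np.eye(n)                                         # unmixing matrix, updated row by row
for i in range(n):
    objective = lambda w: np.sum(w @ X)               # L1 of the extracted source (nonnegative by constraint)
    cons = [
        {"type": "ineq", "fun": lambda w: w @ X},     # extracted source must stay nonnegative
        {"type": "eq", "fun": lambda w: w @ w - 1.0}, # unit-norm row, rules out w = 0
    ]
    W[i] = minimize(objective, W[i], method="SLSQP", constraints=cons).x

S_hat = W @ X                                         # estimated sources
print("minimum of estimated sources:", round(float(S_hat.min()), 4))
```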

Relevance:

100.00%

Publisher:

Abstract:

In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed. © 2010 Michel Journée, Yurii Nesterov, Peter Richtárik and Rodolphe Sepulchre.
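
The single-unit case can be illustrated with a soft-thresholded power iteration on the data matrix, a simple relative of the gradient scheme described above that drives small loadings to zero while staying on the unit sphere. This is a generic sketch, not the paper's algorithm; the sparsity parameter `gamma` and the random test matrix are assumptions of this example.

```python
# Soft-thresholded power iteration for one approximately sparse loading vector.
import numpy as np

def sparse_pc(A, gamma=1.0, n_iter=200):
    """Return a unit-norm, approximately sparse x that (locally) maximizes x' A'A x."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        z = A.T @ (A @ x)                                    # power/gradient step on A'A
        z = np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)  # soft-threshold small loadings
        nz = np.linalg.norm(z)
        if nz == 0.0:                                        # gamma too large: everything zeroed
            break
        x = z / nz                                           # project back to the unit sphere
    return x

A = np.random.default_rng(1).standard_normal((50, 200))      # more columns (variables) than rows
x = sparse_pc(A, gamma=5.0)
print("nonzero loadings:", np.count_nonzero(x), "of", x.size)
```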

Relevance:

100.00%

Publisher:

Abstract:

A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion, combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is first applied to find this covariance matrix using the expectation-maximization algorithm. Since the mixing coefficients of a finite mixture model are constrained to lie on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
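
The starting point described above, a Parzen window estimator whose Gaussian kernels share one covariance matrix, can be sketched as follows. The rule-of-thumb bandwidth scaling is an assumption of this example; the EM refinement of the covariance and the Riemannian trust-region sparsification of the mixing coefficients are not reproduced here.

```python
# Equal-weight Gaussian mixture centred on the training samples, with a single
# shared covariance matrix (a Parzen window estimate).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))                      # training samples, one kernel each
n, d = X.shape
Sigma = np.cov(X, rowvar=False) * n ** (-2 / (d + 4))  # rule-of-thumb bandwidth scaling (assumed)

def parzen_pdf(x, X=X, Sigma=Sigma):
    """Density estimate at x: uniform mixing coefficients (a point on the simplex)."""
    w = 1.0 / len(X)
    return sum(w * multivariate_normal.pdf(x, mean=xi, cov=Sigma) for xi in X)

print(parzen_pdf(np.zeros(2)))                         # density estimate at the origin
```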

Relevance:

100.00%

Publisher:

Abstract:

Atrial fibrillation (AF) is a common heart disorder. One of the most prominent hypotheses about its initiation and maintenance considers multiple uncoordinated activation foci inside the atrium. However, the implicit assumption behind all the signal processing techniques used for AF, such as dominant frequency and organization analysis, is the existence of a single regular component in the observed signals. In this paper we take into account the existence of multiple foci, performing a spectral analysis to detect their number and frequencies. In order to obtain a cleaner signal on which the spectral analysis can be performed, we introduce sparsity-aware learning techniques to infer the spike trains corresponding to the activations. The good performance of the proposed algorithm is demonstrated on both synthetic and real data. ABSTRACT: An algorithm based on sparse regression techniques for the extraction of cardiac signals in patients with atrial fibrillation (AF).
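
A generic sparse-regression sketch of the spike-train idea is shown below: the observed signal is modelled as a known activation waveform convolved with a sparse, nonnegative train of activations, recovered with an L1-penalised (LASSO) fit. The waveform, noise level, and penalty are hypothetical choices for this illustration, not the authors' algorithm.

```python
# Recover a sparse activation train from a convolved, noisy observation.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, L = 400, 25
waveform = np.hanning(L)                                     # assumed activation shape
spikes = np.zeros(T)
spikes[rng.choice(T - L, size=8, replace=False)] = 1.0       # sparse activations
y = np.convolve(spikes, waveform)[:T] + 0.05 * rng.standard_normal(T)

# Convolution dictionary: column k is the waveform shifted to start at time k.
D = np.column_stack([np.convolve(np.eye(T)[:, k], waveform)[:T] for k in range(T)])

model = Lasso(alpha=0.005, positive=True, max_iter=10000).fit(D, y)
print("indices of recovered activations:", np.flatnonzero(model.coef_ > 0.1))
```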

Relevance:

100.00%

Publisher:

Abstract:

Based on a well-established stratigraphic framework and 47 AMS-14C dated sediment cores, the distribution of facies types on the NW Iberian margin is analysed in response to the last deglacial sea-level rise, thus providing a case study on the sedimentary evolution of a high-energy, low-accumulation shelf system. Altogether, four main types of sedimentary facies are defined. (1) A gravel-dominated facies occurs mostly as time-transgressive ravinement beds, which initially developed as shoreface and storm deposits in shallow waters on the outer shelf during the last sea-level lowstand; (2) A widespread, time-transgressive mixed siliceous/biogenic-carbonaceous sand facies indicates areas of moderate hydrodynamic regimes, high contribution of reworked shelf material, and fluvial supply to the shelf; (3) A glaucony-containing sand facies in a stationary position on the outer shelf formed mostly during the last deglacial sea-level rise by reworking of older deposits as well as authigenic mineral formation; and (4) A mud facies is mostly restricted to confined Holocene fine-grained depocentres, which are located in a mid-shelf position. The observed spatial and temporal distribution of these facies types on the high-energy, low-accumulation NW Iberian shelf was essentially controlled by the local interplay of sediment supply, shelf morphology, and strength of the hydrodynamic system. These patterns are in contrast to high-accumulation systems, where extensive sediment supply is the dominant control on facies distribution. This study emphasises the importance of large-scale erosion and material recycling for the sedimentary buildup during the deglacial drowning of the shelf. The presence of a homogeneous and up to 15-m-thick transgressive cover above a lag horizon contradicts the common assumption of sparse and laterally confined sediment accumulation on high-energy shelf systems during deglacial sea-level rise. In contrast to this extensive sand cover, laterally very confined and at most 4-m-thick mud depocentres developed during the Holocene sea-level highstand. This restricted formation of fine-grained depocentres was related to the combination of: (1) frequently occurring high-energy hydrodynamic conditions; (2) low overall terrigenous input by the adjacent rivers; and (3) the large distance of the Galicia Mud Belt from its main sediment supplier.

Relevance:

100.00%

Publisher:

Abstract:

For users of germplasm collections, the purpose of measuring characterization and evaluation descriptors, and subsequently using statistical methodology to summarize the data, is not only to interpret the relationships between the descriptors, but also to characterize the differences and similarities between accessions in relation to their phenotypic variability for each of the measured descriptors. The set of descriptors for the accessions of most germplasm collections consists of both numerical and categorical descriptors. This poses problems for a combined analysis of all descriptors because few statistical techniques deal with mixtures of measurement types. In this article, nonlinear principal component analysis was used to analyze the descriptors of the accessions in the Australian groundnut collection. It was demonstrated that the nonlinear variant of ordinary principal component analysis is an appropriate analytical tool because subspecies and botanical varieties could be identified on the basis of the analysis and characterized in terms of all descriptors. Moreover, outlying accessions could be easily spotted and their characteristics established. The statistical results and their interpretations provide users with a more efficient way to identify accessions of potential relevance for their plant improvement programs and encourage and improve the usefulness and utilization of germplasm collections.
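
As a rough stand-in for a mixed-type analysis of this kind, categorical descriptors can be one-hot encoded and numerical descriptors standardised before an ordinary PCA, as sketched below. This approximates, but is not, the optimal-scaling nonlinear PCA used in the study; the descriptor names and values are hypothetical.

```python
# Ordinary PCA on mixed descriptors after encoding, as a crude approximation of
# nonlinear (optimal-scaling) PCA for accession data.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

accessions = pd.DataFrame({
    "pod_length_mm": [31.2, 28.5, 40.1, 35.7],                       # numerical descriptor
    "seed_weight_g": [0.45, 0.38, 0.61, 0.52],                       # numerical descriptor
    "growth_habit":  ["erect", "prostrate", "erect", "spreading"],   # categorical descriptor
    "testa_colour":  ["tan", "red", "tan", "purple"],                # categorical descriptor
})

X = pd.get_dummies(accessions, columns=["growth_habit", "testa_colour"], dtype=float)
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores)    # 2-D coordinates used to inspect groupings and outlying accessions
```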

Relevance:

100.00%

Publisher:

Abstract:

People’s beliefs about where society has come from and where it is going have personal and political consequences. Here, we conduct a detailed investigation of these beliefs by re-analyzing data from Kashima et al.’s Study 2 (n = 320), collected in China, Australia, and Japan. Kashima et al. identified a “folk theory of social change” (FTSC): the belief that people in society become more competent over time, but less warm and moral. Using three-mode principal components analysis, an under-utilized analytical method in psychology, we identified two additional narratives: Utopianism/Dystopianism (people becoming generally better or worse over time) and Expansion/Contraction (an increase/decrease in both positive and negative aspects of character over time). The countries differed in their endorsement of these three narratives of societal change: Chinese participants endorsed the FTSC and Utopian narratives more than the other countries, Japanese participants held Dystopian and Contraction beliefs more than the other countries, and Australians’ narratives of societal change fell between those of the Chinese and Japanese. Those who believed in greater economic/technological development held stronger FTSC and Expansion/Contraction narratives, but not Utopianism/Dystopianism. By identifying multiple cultural narratives about societal change, this research provides insights into how people across cultures perceive their social world and their visions of the future.
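
Three-mode principal components analysis decomposes a respondents x items x time-points data array into component matrices for each mode plus a core array. The sketch below computes a basic truncated higher-order SVD, one common way to obtain such a Tucker-type decomposition, on random stand-in data; the array sizes and ranks are assumptions and this is not a re-analysis of the Kashima et al. data.

```python
# Truncated higher-order SVD as a simple three-mode (Tucker-type) decomposition.
import numpy as np

def unfold(tensor, mode):
    """Matricize the tensor along the given mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Per-mode component matrices (left singular vectors of each unfolding) plus a core array."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):                  # contract each mode with its components
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(0)
data = rng.standard_normal((320, 12, 2))                # respondents x traits x (past, future)
core, (persons, traits, times) = hosvd(data, ranks=(3, 3, 2))
print(core.shape, persons.shape, traits.shape, times.shape)
```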

Relevance:

100.00%

Publisher:

Abstract:

Some statistical procedures already available in the literature are employed in developing the water quality index, WQI. The complexity and interdependency that occur in the physical and chemical processes of water could be explained more easily if statistical approaches were applied to water quality indexing. The most popular statistical method used in developing the WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized. Outliers may be retained in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in the classical PCA methodology are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to re-calculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to that of the existing DOE-WQI.
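
The robust-PCA idea can be sketched by replacing the classical sample covariance with a robust minimum covariance determinant estimate before extracting the principal components, so that outlying samples have less influence on the loadings. The data below are random stand-ins, not the Langat River measurements.

```python
# Classical vs. robust principal components via the covariance matrix.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))          # stand-in for 6 water-quality parameters
X[:5] += 15.0                              # a few gross outliers

def pc_loadings(cov):
    """Eigenvectors of the covariance matrix, ordered from largest eigenvalue."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, ::-1]

classical = pc_loadings(np.cov(X, rowvar=False))
robust = pc_loadings(MinCovDet(random_state=0).fit(X).covariance_)
print("first classical PC:", np.round(classical[:, 0], 2))
print("first robust PC:   ", np.round(robust[:, 0], 2))
```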

Relevance:

100.00%

Publisher:

Abstract:

Pattern recognition is a promising approach for the identification of structural damage using measured dynamic data. Much of the research on pattern recognition has employed artificial neural networks (ANNs) and genetic algorithms as systematic ways of matching pattern features. The selection of a damage-sensitive and noise-insensitive pattern feature is important for all structural damage identification methods. Accordingly, a neural network-based damage detection method using frequency response function (FRF) data is presented in this paper. This method can effectively consider uncertainties in the measured data from which training patterns are generated. The proposed method reduces the dimension of the initial FRF data, transforms it into new damage indices, and employs an ANN for damage localization and quantification using the recognized damage patterns. In civil engineering applications, the measurement of dynamic response under field conditions always contains noise components from environmental factors. In order to evaluate the performance of the proposed strategy with noise-polluted data, noise-contaminated measurements are also introduced to the proposed algorithm. ANNs with an optimal architecture give minimum training and testing errors and provide precise damage detection results. In order to maximize damage detection performance, the optimal ANN architecture is identified by selecting the number of hidden layers and the number of neurons per hidden layer through a trial-and-error method. In real testing, the number of measurement points and the measurement locations used to obtain the structural response are critical for damage detection. Therefore, optimal sensor placement to improve damage identification is also investigated herein. A finite element model of a two-storey framed structure is used to train the neural network. It shows accurate performance and gives low errors with simulated and noise-contaminated data for single and multiple damage cases. As a result, the proposed method can be used for structural health monitoring and damage detection, particularly for cases where the measurement data is very large. Furthermore, it is suggested that an optimal ANN architecture can detect damage occurrence with good accuracy and can provide damage quantification with reasonable accuracy under varying levels of damage.
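
The overall pipeline described above can be sketched as follows: compress FRF vectors into a few indices (PCA is used here as a generic dimension-reduction stand-in), then train a small neural network to map those indices to a damage parameter. The synthetic FRFs, network size, and damage encoding are assumptions of this example, not the paper's two-storey frame model or its specific damage indices.

```python
# Toy FRF data -> reduced indices -> neural-network damage quantification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_freq = 400, 256
damage = rng.uniform(0.0, 0.5, size=n_samples)                 # assumed stiffness-reduction ratio
freqs = np.linspace(0.0, 1.0, n_freq)
# Toy FRFs whose resonance shifts with damage severity, plus measurement noise.
frf = 1.0 / np.abs((0.5 - 0.2 * damage[:, None]) - freqs[None, :] + 0.02j)
frf += 0.05 * rng.standard_normal(frf.shape)

features = PCA(n_components=10).fit_transform(frf)             # stand-in "damage indices"
X_tr, X_te, y_tr, y_te = train_test_split(features, damage, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)
print("test R^2:", round(net.score(X_te, y_te), 3))
```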