924 results for L1-norm
Abstract:
Tensor analysis plays an important role in modern image and vision computing problems. Most existing tensor analysis approaches are based on the Frobenius norm, which makes them sensitive to outliers. In this paper, we propose L1-norm-based tensor analysis (TPCA-L1), which is robust to outliers. Experimental results on face and other datasets demonstrate the advantages of the proposed approach.
Abstract:
In this paper, we first present a simple but effective L1-norm-based two-dimensional principal component analysis (2DPCA). The traditional L2-norm-based least-squares criterion is sensitive to outliers, whereas the newly proposed L1-norm 2DPCA is robust. Experimental results demonstrate its advantages.
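For intuition, the robust criterion in the two abstracts above replaces a squared (L2/Frobenius) measure with an L1 dispersion that is maximised greedily. Below is a minimal sketch of the fixed-point iteration commonly used for L1-norm PCA-type objectives; it is my own illustration, not the authors' code, and the function name, toy data and parameters are all placeholders.

```python
# Greedy fixed-point iteration for an L1-PCA-style direction:
# maximize sum_i |x_i^T w| over unit vectors w by iterating
# w <- normalize(sum_i sign(x_i^T w) * x_i).
import numpy as np

def l1_pca_direction(X, n_iter=100, tol=1e-8, seed=0):
    # X: (num_samples, dim); for the 2DPCA view, the rows of all training
    # images can be stacked as the samples. Returns a unit projection vector.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                      # avoid zero "polarity" entries
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# toy usage: an elongated point cloud plus one gross outlier
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((200, 2)) @ np.diag([3.0, 0.3]),
               [[50.0, -40.0]]])
print(l1_pca_direction(X))   # direction is far less pulled by the outlier than L2 PCA
```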
Abstract:
We consider four-dimensional variational data assimilation (4DVar) and show that it can be interpreted as Tikhonov or L2-regularisation, a widely used method for solving ill-posed inverse problems. It is known from image restoration and geophysical problems that an alternative regularisation, namely L1-norm regularisation, recovers sharp edges better than L2-norm regularisation. We apply this idea to 4DVar for problems where shocks and model error are present and give two examples which show that L1-norm regularisation performs much better than the standard L2-norm regularisation in 4DVar.
Abstract:
The analysis of complex nonlinear systems is often carried out using simpler piecewise-linear representations of them. A principled and practical technique is proposed to linearize and evaluate arbitrary continuous nonlinear functions using polygonal (continuous piecewise-linear) models under the L1 norm. A thorough error analysis is developed to guide an optimal design of two kinds of polygonal approximations in the asymptotic case of a large budget of evaluation subintervals N. The method allows the user to obtain the level of linearization (N) for a target approximation error and vice versa. It is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), allowing real-time performance of computationally demanding applications. The quality and efficiency of the technique have been measured in detail on two nonlinear functions that are widely used in many areas of scientific computing and are expensive to evaluate.
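As a numerical companion only (not the paper's optimal design procedure), the sketch below builds a plain interpolating polygonal model on N uniform subintervals and estimates its L1 approximation error by dense sampling, illustrating the asymptotic decay in N that the paper analyses; the test function and sample sizes are my own choices.

```python
# L1 error of a continuous piecewise-linear (polygonal) interpolant on [a, b].
import numpy as np

def l1_error_of_polygonal(f, a, b, N, n_quad=200001):
    knots = np.linspace(a, b, N + 1)            # N uniform subintervals
    x = np.linspace(a, b, n_quad)
    approx = np.interp(x, knots, f(knots))      # polygonal model of f
    return np.mean(np.abs(f(x) - approx)) * (b - a)   # ~ integral of |f - model|

f = np.exp                                      # stand-in for an expensive nonlinear function
for N in (8, 16, 32, 64, 128):
    print(N, l1_error_of_polygonal(f, 0.0, 1.0, N))
# For a smooth f the error decays roughly like O(1/N^2) as N grows.
```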
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) LSA recovers latent semantic factors underlying the document space, (2) this can be accomplished through lossy compression of the document space by eliminating lexical noise, and (3) the latter is best achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example shows that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
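For reference, the "received" LSA pipeline that the abstract questions is a truncated SVD of a term-document matrix; the tiny self-contained sketch below shows that conventional recipe (not the authors' l1-based replacement), with a toy matrix and rank k of my own choosing.

```python
# Classical LSA: rank-k truncated SVD of a term-document matrix, then document
# similarity computed in the k-dimensional "latent semantic" space.
import numpy as np

# toy term-document matrix: rows = terms, columns = documents
A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 3., 0., 1.],
              [0., 0., 2., 2.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                                 # number of latent factors kept
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]           # lossy rank-k reconstruction

docs = np.diag(s[:k]) @ Vt[:k, :]                     # documents in latent space
norms = np.linalg.norm(docs, axis=0)
sim = (docs.T @ docs) / (norms[:, None] * norms[None, :])   # cosine similarities
print(np.round(sim, 2))
```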
Abstract:
This paper develops an algorithm for finding sparse signals from limited observations of a linear system. We assume an adaptive Gaussian model for sparse signals. This model results in a least-squares problem with an iteratively reweighted L2 penalty that approximates the L0-norm. We propose a fast algorithm to solve the problem within a continuation framework. In our examples, we show that the correct sparsity map and sparsity level are gradually learnt during the iterations even when the number of observations is reduced or when observation noise is present. In addition, with the help of sophisticated interscale signal models, the algorithm is able to recover signals more accurately and from fewer observations than typical L1-norm and reweighted L1-norm methods. ©2010 IEEE.
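A much-simplified sketch of the central idea, an iteratively reweighted L2 penalty that mimics an L0-like penalty, is below. It omits the adaptive Gaussian signal model, the continuation framework and the interscale models of the paper; problem sizes and parameter values are placeholders.

```python
# Iteratively reweighted L2 (FOCUSS-like) sparse recovery:
# minimize ||y - A x||^2 + lam * sum_i w_i * x_i^2 with w_i = 1/(x_i^2 + eps),
# so the penalty approaches an L0-like count of nonzeros as eps -> 0.
import numpy as np

def irls_l0_like(A, y, lam=0.1, eps=1e-3, n_iter=50):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = 1.0 / (x**2 + eps)                              # reweighting from current estimate
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                                        # unknowns, observations, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = irls_l0_like(A, y)
print("true support:     ", np.sort(np.flatnonzero(x_true)))
print("largest estimates:", np.sort(np.argsort(-np.abs(x_hat))[:k]))
```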
Abstract:
The role of seismic data in oil and gas prospecting and exploration has long gone beyond merely delineating structural configuration. To pinpoint favourable target areas more precisely, we need to image the subsurface media accurately, so prestack migration imaging, and prestack depth migration in particular, is used increasingly widely. Current seismic migration imaging methods are mainly based on primary energy, and most of them use the one-way wave equation. Multiples mask primaries and are sometimes treated as primaries, interfering with the imaging of primaries, so multiple elimination remains a very important research subject. At present there are three wavefield prediction and subtraction approaches: wavefield extrapolation, the feedback loop, and the inverse-scattering series. In this thesis I focus on the feedback loop method, which consists of prediction and subtraction. The method currently has the following problems. First, it requires the seismic data used to predict multiples to be full-wavefield data, an assumption the original seismic data usually do not satisfy, so the data must be regularized. Second, the multiples predicted by the feedback loop usually do not match the real multiples in the data, differing in amplitude, phase and arrival time; the predicted multiples must therefore be matched to those in the data by estimating filter coefficients before subtraction, and choosing a suitable matching filtering method is the key to multiple elimination. Among the many matching filtering methods, I put the emphasis on least-squares adaptive matching filtering and L1-norm-minimizing adaptive matching filtering. The least-squares method is computationally very fast, but it relies on two assumptions: that the signal has minimum energy and that it is orthogonal to the noise. When the seismic data violate these assumptions, the method cannot produce good matching results and hence cannot attenuate multiples correctly. L1-norm adaptive matching filtering avoids both assumptions and yields good matching results, but it is computationally somewhat slower. The results of my research are as follows:
1. A seismic trace interpolation method based on F-K migration and demigration is proposed. Its main advantage is that it can interpolate traces at any offset, and a simple model shows that it is valid.
2. Different least-squares adaptive matching filtering methods are compared. On three model datasets and two field datasets, the equipoise multi-channel adaptive matching filtering method gives better multiple-elimination results than the other matching methods.
3. An equipoise multi-channel L1-norm adaptive matching filtering method is proposed. Because the L1 norm is robust to large amplitude differences and requires neither the minimum-energy nor the orthogonality assumption, this method achieves better multiple elimination.
4. Multiple elimination in the inverse data space is studied. This new method differs from those mentioned above; its advantages are that it is simple in theory, needs no adaptive subtraction and is computationally very fast, while its disadvantage is that its solution is not stable.
Overall, on three model datasets and many field datasets, the equipoise multi-channel and equipoise pseudo-multi-channel least-squares matching filtering methods and the equipoise multi-channel and equipoise pseudo-multi-channel L1-norm matching filtering methods give better multiple-elimination results than the other matching methods.
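As a rough sketch only (not the thesis implementation), an L1-norm adaptive matching filter can be obtained by iteratively reweighted least squares: build a convolution matrix from the predicted multiple, start from the least-squares filter, and reweight the residual so that the fit approaches the L1 criterion. The filter length, IRLS weighting and synthetic traces below are my own choices.

```python
# L1-norm adaptive matching filter via IRLS: find a short filter f so that the
# filtered predicted multiple matches the recorded data in the L1 sense.
import numpy as np

def convolution_matrix(m, nf):
    """Columns are time-shifted copies of the predicted-multiple trace m."""
    n = len(m)
    M = np.zeros((n, nf))
    for j in range(nf):
        M[j:, j] = m[:n - j]
    return M

def l1_matching_filter(d, m, nf=11, n_iter=30, eps=1e-6):
    M = convolution_matrix(m, nf)
    f = np.linalg.lstsq(M, d, rcond=None)[0]          # least-squares starting filter
    for _ in range(n_iter):
        r = d - M @ f
        w = 1.0 / np.sqrt(r**2 + eps)                  # IRLS weights for the L1 misfit
        f = np.linalg.solve(M.T @ (w[:, None] * M), M.T @ (w * d))
    return f

rng = np.random.default_rng(1)
multiple = rng.standard_normal(500)                    # predicted multiple trace
true_filter = np.array([0.0, 0.8, -0.3])               # amplitude/phase/time mismatch
data = np.convolve(multiple, true_filter)[:500] + 0.05 * rng.standard_normal(500)

f = l1_matching_filter(data, multiple, nf=5)
print(np.round(f, 2))                                  # primaries = data - M @ f
```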
Abstract:
Mixture of Gaussians (MoG) modelling [13] is a popular approach to background subtraction in video sequences. Although the algorithm shows good empirical performance, it lacks theoretical justification. In this paper, we give a justification for it from an online stochastic expectation-maximization (EM) viewpoint and extend it to a general framework of regularized online classification EM for MoG with guaranteed convergence. By choosing a special regularization function, the l1 norm, we derive a new set of updating equations for l1-regularized online MoG. It is shown empirically that l1-regularized online MoG converges faster than the original online MoG.
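To make the setting concrete, the sketch below shows a per-pixel online MoG update in the familiar Stauffer-Grimson style, plus one plausible l1-style shrinkage step on the mixture weights. This is my own simplified illustration; the regularized online classification EM updates actually derived in the paper differ and are not reproduced here, and all thresholds and learning rates are placeholders.

```python
# One online update of a per-pixel Gaussian mixture for background subtraction,
# with an l1-style soft-shrinkage of the mixture weights before renormalisation.
import numpy as np

def update_pixel(x, w, mu, var, alpha=0.01, shrink=1e-3, thresh=2.5):
    d = np.abs(x - mu) / np.sqrt(var)             # normalised distances to components
    k = int(np.argmin(d))
    matched = d[k] < thresh
    if matched:                                   # update the matched component
        w *= (1.0 - alpha)
        w[k] += alpha
        delta = x - mu[k]
        mu[k] += alpha * delta
        var[k] += alpha * (delta**2 - var[k])
    else:                                         # replace the weakest component
        j = int(np.argmin(w))
        w[j], mu[j], var[j] = alpha, x, 30.0**2
    w[:] = np.maximum(w - shrink, 0.0)            # l1-style shrinkage of the weights
    w[:] /= w.sum()                               # renormalise
    return matched and w[k] > 0.25                # crude background/foreground decision

# usage on a single pixel's intensity stream
w = np.array([0.5, 0.3, 0.2]); mu = np.array([100.0, 150.0, 200.0]); var = np.full(3, 400.0)
for x in [101, 99, 180, 102, 100, 250]:
    print(x, update_pixel(float(x), w, mu, var))
```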
Abstract:
Classification methods with embedded feature selection capability are very appealing for the analysis of complex processes, since they allow root-cause analysis even when the number of input variables is high. In this work, we investigate the performance of three classification techniques within a Monte Carlo strategy, with the aim of root cause analysis. We consider the naive Bayes classifier and the logistic regression model with two different implementations for controlling model complexity, namely a LASSO-like implementation with an L1-norm regularization and a fully Bayesian implementation of the logistic model, the so-called relevance vector machine. Several challenges can arise when estimating such models, mainly linked to the characteristics of the data: a large number of input variables, high correlation among subsets of variables, the situation where the number of variables is higher than the number of available data points, and the case of unbalanced datasets. Using an ecological and a semiconductor manufacturing dataset, we show the advantages and drawbacks of each method, highlighting the superior classification accuracy of the relevance vector machine with respect to the other classifiers. Moreover, we show how the combination of the proposed techniques and the Monte Carlo approach can be used to get more robust insights into the problem under analysis when faced with challenging modelling conditions.
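A compact sketch of the LASSO-like ingredient described above, L1-penalised logistic regression whose zeroed coefficients act as embedded feature selection, is shown below using scikit-learn on synthetic data. The dataset, regularisation strength C and solver are placeholders, and the Monte Carlo wrapper, naive Bayes and relevance vector machine comparisons are not reproduced.

```python
# L1-regularized logistic regression: coefficients driven exactly to zero
# identify input variables that the classifier does not use.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_redundant=5, random_state=0)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])               # surviving (non-zero) features
print("features kept by the L1 penalty:", selected)
print("training accuracy:", clf.score(X, y))
```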
Abstract:
Sparse-representation-based visual tracking approaches have attracted increasing interest in the community in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization methods, the candidate with the lowest error when reconstructed using only the target templates and the associated coefficients is taken as the tracking result. In spite of the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least-squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates to the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint and use a weighted least squares (WLS) method to obtain the representation coefficients. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between the data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
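A rough, self-contained sketch of the two ingredients described above follows: dimensionality reduction by a random projection (a plain random-sign projection here, standing in for the structurally random projection of the paper) and a closed-form weighted least-squares solve for the representation coefficients in place of an L1 minimisation. Templates, weights and the regulariser are synthetic placeholders of mine.

```python
# Random projection of templates/candidate, then WLS coefficients:
# c = argmin_c sum_j w_j (y_j - (T c)_j)^2 + lam ||c||^2  (closed form).
import numpy as np

rng = np.random.default_rng(0)
d, d_low, n_templates = 1024, 64, 20                  # raw dim, projected dim, templates
T = rng.standard_normal((d, n_templates))             # target + background templates
y = T @ np.r_[np.ones(5), np.zeros(15)] / 5 + 0.05 * rng.standard_normal(d)  # candidate

R = rng.choice([-1.0, 1.0], size=(d_low, d)) / np.sqrt(d_low)   # random-sign projection
T_p, y_p = R @ T, R @ y                                # pairwise distances roughly preserved

w = np.ones(d_low)                                     # per-feature weights (uniform here)
lam = 1e-2
c = np.linalg.solve(T_p.T @ (w[:, None] * T_p) + lam * np.eye(n_templates),
                    T_p.T @ (w * y_p))                 # closed-form WLS coefficients
print(np.round(c, 2))
print("reconstruction error:", np.linalg.norm(y_p - T_p @ c))
```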
Abstract:
The purpose of this thesis is to study the properties of the solutions of the eigenvalue equation of the Laplace operator on the disk as the eigenvalues tend to infinity. In particular, we are interested in the growth rate of the pointwise and $L^\infty$ norms. Let $D$ be the unit disk and $\partial D$ its boundary (the unit circle). We consider solutions of the eigenvalue equation $\Delta f = \lambda f$ with either Dirichlet boundary conditions ($f|_{\partial D} = 0$) or Neumann boundary conditions ($\partial f/\partial n|_{\partial D} = 0$; note that on the disk the normal derivative is simply the derivative with respect to the radial variable, $\partial/\partial n = \partial/\partial r$). The corresponding eigenfunctions are
$f_\lambda(r,\theta) = f_{n,m}(r,\theta) = J_n(k_{n,m} r)\,(A\cos(n\theta) + B\sin(n\theta))$ (Dirichlet),
$f^N_\lambda(r,\theta) = f^N_{n,m}(r,\theta) = J_n(k'_{n,m} r)\,(A\cos(n\theta) + B\sin(n\theta))$ (Neumann),
where $J_n$ is the Bessel function of the first kind of order $n$, $k_{n,m}$ is its $m$-th zero and $k'_{n,m}$ is the $m$-th zero of its derivative (eigenfunctions of the Dirichlet problem are denoted by $f$ and those of the Neumann problem by $f^N$). The spectrum $\mathrm{Sp}_D(\Delta)$ of the Laplacian on $D$, that is, the set of its eigenvalues, is then
$\mathrm{Sp}_D(\Delta) = \{\lambda : \Delta f = \lambda f\} = \{k_{n,m}^2 : n = 0, 1, 2, \dots,\ m = 1, 2, \dots\}$ (Dirichlet),
$\mathrm{Sp}^N_D(\Delta) = \{\lambda : \Delta f^N = \lambda f^N\} = \{(k'_{n,m})^2 : n = 0, 1, 2, \dots,\ m = 1, 2, \dots\}$ (Neumann).
Finally, we require the eigenfunctions to be normalised with respect to the $L^2$ norm on $D$, that is, $\int_D F^2\,da = 1$ (from now on $F$ denotes a normalised eigenfunction and $f$ an arbitrary one). Under these conditions, we wish to determine the growth rate of the $L^\infty$ norm of the normalised eigenfunctions, denoted $\|F\|_\infty$, as a function of $\lambda$. Recall that the $L^\infty$ norm of a function on a domain is the maximum of its absolute value over the domain. Note that $\lambda$ depends on the two parameters $m$ and $n$, and that the relation between $\lambda$ and the $L^\infty$ norm depends on the ratio of their growth rates. The study of the behaviour of the $L^\infty$ norm is closely related to the study of the set $E(D)$ of accumulation points of $\log(\|F\|_\infty)/\log\lambda$. Our main result is to show that $[7/36, 1/4] \subseteq E(B^2) \subseteq [1/18, 1/4]$. The thesis is organised as follows. The introduction and the main results are presented in Chapter 1. In Chapter 2 we recall some well-known facts about the eigenfunctions of the Laplacian on the disk and about Bessel functions. In Chapter 3 we prove results on the growth of the pointwise norm of the eigenfunctions; in particular, we show that if $m/n \to 0$, then for any given point $(r,\theta)$ of the disk the value of $F_\lambda(r,\theta)$ decays exponentially as $\lambda \to \infty$. In Chapter 4 we prove several results on the growth of the $L^\infty$ norm. The problem with Neumann boundary conditions is discussed in Chapter 5, and some numerical results are presented in Chapter 6. A brief discussion and a summary of our work are given in Chapter 7.
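Purely as a numerical companion (scipy-based, not part of the thesis), the sketch below evaluates the L2-normalised Dirichlet eigenfunctions $F_{n,m} = c\,J_n(k_{n,m} r)\cos(n\theta)$ (taking $A = 1$, $B = 0$) and the exponent $\log\|F\|_\infty/\log\lambda$ whose accumulation points are studied above; the sample sizes and $(n, m)$ pairs are arbitrary.

```python
# Sup-norm growth of L2-normalised Dirichlet eigenfunctions on the unit disk.
import numpy as np
from scipy.special import jn_zeros, jv

def sup_norm_exponent(n, m, n_samples=20000):
    k = jn_zeros(n, m)[-1]                      # m-th zero of J_n, so lambda = k^2
    r = np.linspace(0.0, 1.0, n_samples)
    dr = r[1] - r[0]
    # L2 normalisation on the disk for F = c J_n(k r) cos(n theta):
    # the angular integral of cos^2(n theta) is 2*pi for n = 0 and pi otherwise.
    angular = 2 * np.pi if n == 0 else np.pi
    norm2 = angular * np.sum(jv(n, k * r) ** 2 * r) * dr
    c = 1.0 / np.sqrt(norm2)
    sup = c * np.max(np.abs(jv(n, k * r)))      # ||F||_inf over the disk
    lam = k ** 2
    return lam, sup, np.log(sup) / np.log(lam)

for n, m in [(0, 20), (20, 1), (50, 1), (100, 1)]:
    print(n, m, np.round(sup_norm_exponent(n, m), 4))
```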
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, when a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
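The edge-preserving effect behind this abstract and the earlier 4DVar one can be seen on a toy stationary inverse problem (not 4DVar itself): below, a noisy step is recovered with an L2 (Tikhonov) penalty on its gradient and with an L1/TV penalty solved by simple IRLS. The problem size, noise level and regularisation weight are placeholders of mine.

```python
# L2 (Tikhonov) versus L1/TV penalty on the gradient of a signal with a sharp front.
import numpy as np

n = 200
x_true = np.zeros(n); x_true[n // 2:] = 1.0            # sharp front
rng = np.random.default_rng(0)
y = x_true + 0.1 * rng.standard_normal(n)              # noisy observations

D = np.diff(np.eye(n), axis=0)                         # discrete gradient operator
lam = 5.0

# Tikhonov / L2-norm penalty: min ||x - y||^2 + lam ||D x||^2  (closed form)
x_l2 = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# L1/TV penalty: min ||x - y||^2 + lam ||D x||_1  via iteratively reweighted LS
x_l1 = y.copy()
for _ in range(50):
    w = 1.0 / np.sqrt((D @ x_l1) ** 2 + 1e-6)
    x_l1 = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)

# the L1/TV estimate typically retains a much sharper jump across the front
print("L2 jump height:", round(x_l2[n // 2 + 2] - x_l2[n // 2 - 2], 3))
print("L1 jump height:", round(x_l1[n // 2 + 2] - x_l1[n // 2 - 2], 3))
```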
Abstract:
The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set to the inverses of the associated l1-norms of the parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I because it exploits known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
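Sketching the key approximation in batch form rather than as the recursive (RLS) updates actually derived in the abstract: the l1 penalty is replaced by an adaptively weighted l2 penalty whose weights are the inverse magnitudes of the previous parameter estimates, so each step has a closed-form solution and small coefficients are attracted towards zero. The channel, dimensions, gamma and eps below are placeholders of mine.

```python
# Zero-attracting weighted-l2 approximation of an l1 penalty (batch analogue):
# minimize ||y - X theta||^2 + gamma * sum_i theta_i^2 / (|theta_i_old| + eps).
import numpy as np

def zero_attracting_ls(X, y, gamma=0.5, eps=1e-4, n_iter=20):
    theta = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary LS starting point
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(theta) + eps))        # weights from previous estimate
        theta = np.linalg.solve(X.T @ X + gamma * W, X.T @ y)
    return theta

rng = np.random.default_rng(0)
h_true = np.zeros(32); h_true[[3, 10, 25]] = [1.0, -0.5, 0.3]   # sparse channel
X = rng.standard_normal((200, 32))                      # input regressors
y = X @ h_true + 0.01 * rng.standard_normal(200)        # received signal

print(np.round(zero_attracting_ls(X, y), 2))            # small taps attracted to zero
```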