968 results for singular value decomposition (SVD)
Abstract:
Two jamming cancellation algorithms are developed based on a stable solution of the least squares problem (LSP) provided by regularization. They are based on a filtered singular value decomposition (SVD) and on modifications of the Greville formula. Both algorithms allow an efficient hardware implementation. Testing results on artificial data modeling difficult real-world situations are also provided.
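A minimal sketch of the filtered-SVD step, assuming NumPy; the truncation threshold and the toy ill-conditioned system are illustrative, and the Greville-formula variant is not shown.

```python
import numpy as np

def filtered_svd_solve(A, b, tol=1e-8):
    """Regularized least-squares solution via a filtered SVD:
    singular values below tol * s_max are discarded, stabilizing
    an ill-conditioned problem."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]                       # filter out tiny singular values
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Nearly rank-deficient system: the filtered solution stays well-behaved
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-13]])
b = np.array([2.0, 2.0])
x = filtered_svd_solve(A, b)
```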
Abstract:
Matrix factorization (MF) has evolved as one of the best practices for handling sparse data in the field of recommender systems. Funk singular value decomposition (SVD) is a variant of MF that remains a state-of-the-art method and was instrumental in winning the Netflix prize competition. The method is widely used, with modifications, in present-day recommender systems research. With data points potentially growing at very high velocity, it is prudent to devise newer methods that can handle such data more accurately and efficiently than Funk-SVD in the context of recommender systems. In view of the growing data volume, I propose a latent factor model that caters to both accuracy and efficiency by reducing the number of latent features of either users or items, making it less complex than Funk-SVD, where the latent features of users and items are equal in number and often larger. A comprehensive empirical evaluation of accuracy on two publicly available datasets, Amazon and ml-100k, reveals that the proposed methods achieve comparable accuracy with lower complexity than Funk-SVD.
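The Funk-SVD baseline the abstract builds on can be sketched as plain SGD over observed ratings; the data, learning rate and regularization below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def funk_svd(ratings, n_users, n_items, k=2, lr=0.05, reg=0.01, epochs=1000, seed=0):
    """Plain Funk-SVD: learn user and item latent factor matrices P, Q by
    stochastic gradient descent over observed (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            pu = P[u].copy()                     # update both factors from the old P[u]
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Tiny illustrative rating triples (hypothetical data)
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0)]
P, Q = funk_svd(ratings, n_users=2, n_items=2)
```

The proposed variant would shrink the latent dimension on one side (users or items); the sketch keeps them equal, as in Funk-SVD.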
Abstract:
A finite-strain solid–shell element is proposed. It is based on least-squares in-plane assumed strains and assumed natural transverse shear and normal strains. The singular value decomposition (SVD) is used to define local (integration-point) orthogonal frames of reference solely from the Jacobian matrix. The complete finite-strain formulation is derived and tested. Assumed strains obtained from least-squares fitting are an alternative to enhanced-assumed-strain (EAS) formulations and, in contrast with these, result in an element satisfying the patch test. There are no additional degrees of freedom, as is the case with enhanced-assumed-strain formulations, even by means of static condensation. Least-squares fitting produces invariant finite-strain elements which are shear-locking free and amenable to incorporation in large-scale codes. With that goal, we use automatically generated code produced by AceGen and Mathematica. All benchmarks show excellent results, similar to the best available shell and hybrid solid elements, with significantly lower computational cost.
Abstract:
In this article, we describe a novel methodology to extract semantic characteristics from protein structures using linear algebra in order to compose structural signature vectors which may be used efficiently to compare and classify protein structures into fold families. These signatures are built from the pattern of hydrophobic intrachain interactions using Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI) techniques. Considering proteins as documents and contacts as terms, we have built a retrieval system which is able to find conserved contacts in samples of myoglobin fold family and to retrieve these proteins among proteins of varied folds with precision of up to 80%. The classifier is a web tool available at our laboratory website. Users can search for similar chains from a specific PDB, view and compare their contact maps and browse their structures using a JMol plug-in.
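The SVD/LSI signature idea (proteins as documents, contacts as terms) can be sketched on a toy contact-count matrix; the data below are hypothetical, not from the PDB.

```python
import numpy as np

# Toy contacts-by-proteins count matrix ("terms x documents" in LSI parlance);
# real signatures would use hydrophobic intrachain contacts (hypothetical data here).
A = np.array([
    [3.0, 3.0, 0.0, 0.0],   # contact pattern shared by proteins 0 and 1
    [2.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 4.0, 4.0],   # contact pattern shared by proteins 2 and 3
    [0.0, 1.0, 3.0, 3.0],
])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                      # retained latent semantic dimensions
signatures = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim signature vector per protein

def cosine(a, b):
    """Cosine similarity used to retrieve/compare protein signatures."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

Proteins sharing contact patterns end up close in the latent space, which is what enables retrieval within a fold family.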
Abstract:
In this paper, a decimative spectral estimation method based on Eigenanalysis and SVD (Singular Value Decomposition) is presented and applied to speech signals in order to estimate formant/bandwidth values. The underlying model decomposes a signal into complex damped sinusoids. For finer estimation, the algorithm is applied not only to speech samples but also to a small number of the autocorrelation coefficients of a speech frame. Correct estimation of formant/bandwidth values depends on the model order and thus on the requested number of poles. Overall, experimental results indicate that the proposed methodology successfully estimates formant trajectories and their respective bandwidths.
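A sketch of one common SVD-subspace route to damped-sinusoid parameters (an HSVD/shift-invariance variant, not necessarily the authors' exact algorithm), on a synthetic noiseless "formant":

```python
import numpy as np

def hsvd_poles(x, L, k):
    """Estimate the poles of a sum of complex damped sinusoids from the
    SVD of the Hankel data matrix (shift invariance of the signal subspace)."""
    N = len(x)
    H = np.array([x[i:i + L] for i in range(N - L + 1)])   # Hankel: H[i, j] = x[i + j]
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    U1, U2 = U[:-1, :k], U[1:, :k]                         # row-shifted subspaces
    return np.linalg.eigvals(np.linalg.pinv(U1) @ U2)      # poles z = exp(-d + 2j*pi*f)

# Damped cosine at normalized frequency 0.1 with damping factor 0.01
n = np.arange(100)
x = np.exp(-0.01 * n) * np.cos(2 * np.pi * 0.1 * n)
poles = hsvd_poles(x, L=40, k=2)
freqs = np.abs(np.angle(poles)) / (2 * np.pi)   # formant-like frequency estimates
damps = -np.log(np.abs(poles))                  # bandwidth-related damping estimates
```

The model order `k` plays exactly the role the abstract describes: it fixes the requested number of poles.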
Abstract:
Based on singular value decomposition and the principle of energy minimization, an adaptive image denoising algorithm is proposed, and an algebraic form of the bounded-variation energy denoising model is given. By minimizing the energy in the sense of a matrix norm, the number of singular values used to reconstruct the denoised image is determined adaptively. The distinguishing feature of the algorithm is that it combines the energy-minimization principle with singular value decomposition, establishing an adaptive image denoising algorithm in an algebraic space. Compared with denoising methods based on the compression ratio and singular value decomposition, the algorithm avoids computing the image compression-ratio function and its inflection point, and therefore has the advantages of fast denoising and a simple, feasible implementation. Experimental results show that the algorithm is effective.
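The adaptive-rank idea can be sketched with a simple energy criterion; the 95% threshold below stands in for the paper's energy-minimization rule, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 3, 64)
clean = np.outer(np.sin(t), np.cos(t))                 # low-rank "image"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.95)) + 1             # smallest rank keeping 95% energy
denoised = (U[:, :r] * s[:r]) @ Vt[:r]                 # reconstruct from r singular triplets
```

Because the signal energy concentrates in the leading singular values while noise spreads across all of them, the truncated reconstruction is closer to the clean image than the noisy one.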
Abstract:
This paper is concerned with the universal (blind) image steganalysis problem and introduces a novel method to detect spatial-domain steganographic methods in particular. The proposed steganalyzer models linear dependencies of image rows/columns in local neighborhoods using the singular value decomposition transform and employs the content independency provided by a Wiener filtering process. Experimental results show that the novel method has superior performance compared with its counterparts on spatial-domain steganography. Experiments also demonstrate the method's reasonable ability to detect discrete cosine transform-based steganography as well as the perturbation quantization method.
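As a rough illustration of modeling local linear dependencies with SVD (the Wiener-filtering stage and the actual classifier are omitted), normalized singular spectra of small neighborhoods separate strongly correlated from weakly correlated content:

```python
import numpy as np

def block_sv_features(img, b=3):
    """Average normalized singular spectrum of b-by-b neighborhoods:
    a crude descriptor of local linear dependencies among rows/columns."""
    h, w = img.shape
    feats = []
    for i in range(0, h - b + 1, b):
        for j in range(0, w - b + 1, b):
            sv = np.linalg.svd(img[i:i + b, j:j + b], compute_uv=False)
            feats.append(sv / (sv.sum() + 1e-12))   # normalize per block
    return np.mean(feats, axis=0)

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0.0, 1.0, 12), (12, 1))   # identical rows: rank-1 blocks
noisy = rng.random((12, 12))                           # weak local dependencies
f_smooth = block_sv_features(smooth)
f_noisy = block_sv_features(noisy)
```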
Abstract:
Multiuser multiple-input multiple-output (MIMO) downlink (DL) transmission schemes experience both multiuser interference and inter-antenna interference. The singular value decomposition provides an appropriate means to process channel information and allows us to take the individual user's channel characteristics into account, rather than treating all users' channels jointly as in zero-forcing (ZF) multiuser transmission techniques. However, research on uncorrelated MIMO channels has attracted a lot of attention and reached a state of maturity. By contrast, performance analysis in the presence of antenna fading correlation, which decreases the channel capacity, requires substantial further research. The joint optimization of the number of activated MIMO layers and the number of bits per symbol, along with the appropriate allocation of the transmit power, shows that not all user-specific MIMO layers necessarily have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
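The layer decoupling the abstract relies on can be shown directly: precoding with the right singular vectors and filtering with the left ones turns the MIMO channel into parallel scalar channels (toy 4x4 Rayleigh-like channel and hypothetical symbols, not the paper's system model):

```python
import numpy as np

rng = np.random.default_rng(1)
nT = nR = 4
H = (rng.standard_normal((nR, nT)) + 1j * rng.standard_normal((nR, nT))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)                  # H = U diag(s) Vh
x = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)  # one QPSK symbol per layer

tx = Vh.conj().T @ x            # precode with the right singular vectors
rx = U.conj().T @ (H @ tx)      # matched receive filter (left singular vectors)
# rx[k] = s[k] * x[k]: layers decouple, removing inter-antenna interference
```

The per-layer gains `s[k]` are exactly what layer/bit/power allocation would then optimize; weak layers may be left unactivated.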
Abstract:
A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time consuming. To automate the selection of the best segmented document, we either need the ground-truth of the document or an evaluation metric. If ground-truth is available, then precision and recall can be used to choose the best binarized document. But what about the case when ground-truth is not available? Can we come up with a metric that evaluates these binarized documents? Hence, we propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the best binarized document, that is, the one closest to the ground-truth of the document.
Abstract:
The objective of this paper is to propose a signal processing scheme that employs subspace-based spectral analysis for formant estimation of speech signals. Specifically, the scheme is based on decimative spectral estimation using Eigenanalysis and SVD (Singular Value Decomposition). The underlying model assumes a decomposition of the processed signal into complex damped sinusoids. For formant tracking, the algorithm is applied to a small number of the autocorrelation coefficients of a speech frame. The proposed scheme is evaluated on both artificial and real speech utterances from the TIMIT database. For the former, comparative results against standard methods are provided, which indicate that the proposed methodology successfully estimates formant trajectories.
Abstract:
In this paper, we propose a deterministic column-based matrix decomposition method. Conventional column-based matrix decomposition (CX) computes the columns by randomly sampling columns of the data matrix. Instead, the newly proposed method (termed CX_D) selects columns in a deterministic manner that well approximates the singular value decomposition. Experimental results demonstrate the power and advantages of the proposed method on three real-world data sets.
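A deterministic column selection can be sketched with rank-k leverage scores; this is one plausible instantiation, not necessarily the paper's CX_D rule.

```python
import numpy as np

def cx_deterministic(A, c, k):
    """Deterministic CX decomposition: pick the c columns of A with the
    largest rank-k leverage scores (instead of random sampling), then
    solve X = C^+ A so that C @ X approximates A."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k] ** 2, axis=0)          # leverage score of each column
    cols = np.argsort(lev)[::-1][:c]           # deterministic top-c selection
    C = A[:, cols]
    X = np.linalg.pinv(C) @ A
    return C, X, cols

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 10))  # rank-3 data matrix
C, X, cols = cx_deterministic(A, c=3, k=3)
```

For an exactly rank-3 matrix, three well-chosen columns span the column space, so `C @ X` recovers `A`.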
Abstract:
The modeling formula based on the seismic wavelet can simulate zero-phase and mixed-phase wavelets well, and can approximate maximum-phase and minimum-phase wavelets in a certain sense. The modeled wavelet can be used as a wavelet function after a suitable modification term is added to meet the required conditions. On the basis of the modified Morlet wavelet, a derivative wavelet function has been derived. As a basic wavelet, it can be used for high-resolution frequency-division processing and instantaneous feature extraction, in accordance with the expansion characteristics of the signal in the time and scale domains of each constructed wavelet. An application example proves the effectiveness and reasonableness of the method. Based on an analysis of the SVD (Singular Value Decomposition) filter, and by taking this wavelet as the basic wavelet and combining the SVD filter with the wavelet transform, a new de-noising method based on multi-dimension and multi-space de-noising is proposed. The implementation of this method is discussed in detail. Theoretical analysis and modeling show that the method has a strong de-noising capacity while preserving the attributes of the effective wave; it is a good tool for de-noising when the S/N ratio is poor. To emphasize the high-frequency information of the reflection events of an important layer while taking other frequency information into account during seismic data processing, deconvolution filters have difficulty realizing this goal, and filters derived from the Fourier transform also present problems. In this paper, a new method is put forward: frequency-division processing of seismic data using the wavelet transform and reconstruction. In ordinary seismic processing methods for resolution improvement, the deconvolution operator has poor local characteristics, which degrades the operator's frequency response; in the wavelet transform, by contrast, the wavelet function has very good local characteristics.
Frequency-division data processing in the wavelet transform also yields quite good high-resolution data, but it requires more time than the deconvolution method. On the basis of the frequency-division processing method in the wavelet domain, a new technique is put forward, which involves 1) designing filter operators equivalent to the deconvolution operator in the time and frequency domains of the wavelet transform, 2) obtaining a derivative wavelet function suitable for high-resolution seismic data processing, and 3) processing high-resolution seismic data by the deconvolution method in the time domain. When producing instantaneous characteristic signals with the Hilbert transform, the Hilbert transform is very sensitive to high-frequency random noise; consequently, even weak high-frequency noise in seismic signals can submerge the obtained instantaneous characteristics. A method for obtaining the instantaneous characteristics of seismic signals in the wavelet domain is put forward, which derives them directly from the characteristics of both the real part (the seismic signal itself) and the imaginary part (its Hilbert transform) of the wavelet transform. The method performs both frequency division and noise removal. Moreover, weak waves whose frequency is lower than that of the high-frequency random noise are retained in the obtained instantaneous characteristics and can be seen in instantaneous characteristic sections (such as instantaneous frequency, instantaneous phase and instantaneous amplitude). Impedance inversion is one of the tools for describing oil reservoirs. One impedance inversion method is generalized linear inversion, which has high inversion precision but is sensitive to noise in the seismic data, leading to erroneous results.
When describing an oil reservoir in an important geological layer, in order to emphasize the geological characteristics of that layer, not only high-frequency impedance (for investigating thin sand layers) but impedance at other frequencies is needed, which is difficult for some impedance inversion methods to achieve. The wavelet transform is very good at de-noising and frequency-division processing. Therefore, an impedance inversion method based on the wavelet transform is put forward in this paper: impedance inversion in frequency division using the wavelet transform and reconstruction. Methods of time-frequency analysis based on the wavelet transform are also given. Finally, the above methods are applied to a real oil field, the Sansan oil field.
Abstract:
This paper presents an analysis of the numeric conditioning of the Hessian matrix of the Lagrangian in the modified barrier function Lagrangian method (MBFL) and the primal-dual logarithmic barrier method (PDLB), obtained in the process of solving an optimal power flow (OPF) problem. The analysis is a comparative study based on the singular value decomposition (SVD) of those matrices. In the MBFL method the inequality constraints are treated by the modified barrier and PDLB methods. The inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter. The first-order necessary conditions of the Lagrangian function are solved by Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The IEEE 14-, 162- and 300-bus electric systems were used in the comparative analysis. ©2007 IEEE.
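The SVD-based conditioning comparison can be sketched as computing the ratio of extreme singular values; the two diagonal matrices below are hypothetical stand-ins for the MBFL and PDLB Hessians, not matrices from the paper.

```python
import numpy as np

def svd_condition(M):
    """Numeric conditioning via SVD: the ratio of the largest to the
    smallest singular value; large values flag near-singular matrices."""
    sv = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    return sv[0] / sv[-1]

# Hypothetical toy Hessians standing in for the two methods' matrices
well_conditioned = np.diag([4.0, 2.0, 1.0])
ill_conditioned = np.diag([4.0, 2.0, 1e-9])
cond_well = svd_condition(well_conditioned)
cond_ill = svd_condition(ill_conditioned)
```

A Newton step on the ill-conditioned Hessian amplifies numerical error by roughly the condition number, which is why the comparison matters for OPF solvers.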
Abstract:
In this work, the singular value decomposition (SVD) of an n x m matrix A representing the magnetic anomaly is viewed as a two-dimensional coherence filtering method that separates correlatable and non-correlatable information contained in the magnetic data matrix A. The SVD filter is defined through the expansion of the matrix A into eigenimages and singular values. Each eigenimage is given by the outer product of the basis vectors (eigenvectors) associated with the eigenvalue-eigenvector problems of the covariance matrices A^T A and A A^T. This filtering method relies on the fact that the eigenimages associated with large singular values concentrate most of the correlatable information present in the data, while the uncorrelated part, presumably consisting of noise caused by external magnetic sources and noise introduced by the measurement process, is concentrated in the remaining eigenimages. We applied this method to different examples of synthetic magnetic data. The method was then applied to data from the aeromagnetic survey carried out by PETROBRÁS in the Carauari-Norte Project (Solimões Basin), in order to analyze its potential for identifying, eliminating or attenuating noise, and as a possible method for enhancing particular features of the anomaly generated by deep and shallow sources. This work also presents the possibility of introducing a static or dynamic shift in the magnetic profiles in order to increase the correlation (coherence) between them, thus allowing as much of the correlatable signal as possible to be concentrated in the first few eigenimages. Another very important aspect of this expansion of the data matrix into eigenimages and singular values is that, from a computational point of view, the storage of the data contained in the matrix, which requires n x m memory addresses, can be reduced considerably by using p eigenimages. The number of memory addresses then falls to p x (n + m + 1), without altering the anomaly, with practically perfect reproduction. We therefore conclude that an appropriate choice of the number and indices of the eigenimages used in the decomposition demonstrates the method's potential for magnetic data processing.
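The storage claim can be checked directly: a rank-p reconstruction from p eigenimages needs p x (n + m + 1) numbers instead of n x m (synthetic rank-p matrix below, not the survey data).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 60, 40, 5
A = rng.standard_normal((n, p)) @ rng.standard_normal((p, m))  # rank-p "anomaly" matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_p = (U[:, :p] * s[:p]) @ Vt[:p]        # reconstruction from p eigenimages

full_storage = n * m                     # 2400 memory addresses for the raw matrix
svd_storage = p * (n + m + 1)            # 505: p left vectors, p right vectors, p singular values
```

For exactly rank-p data the reconstruction is essentially perfect, matching the "practically perfect reproduction" noted above.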