848 results for singular values
Abstract:
Biplot graphics are widely employed in the study of genotype-environment interactions, but they are only a graphical tool without a statistical hypothesis test. The singular values and scores (singular vectors) used in biplots correspond to specific estimates of their parameters, and the use of uncertainty measures may lead to conclusions different from those provided by a simple visual evaluation. The aim of this work is to estimate genotype-environment interactions, using AMMI analysis, through a Bayesian approach. The resulting credibility intervals can be used for decision-making in different analysis situations, making it possible to verify the consistency of the selection and recommendation of cultivars. Two analyses were performed. The first looked into 10 regular commercial hybrids and all 45 possible hybrids obtained from them, assessed in 15 locations. The second evaluated 28 hybrids in 35 different environments, with unbalanced data. The credibility ellipses were grouped according to the interaction pattern in the biplot. The AMMI analysis with a Bayesian approach proved to be a complete analysis of stability and adaptability, providing important information that may help breeders in their decisions. The credibility regions built in the biplots allow an accurate selection and a precise genotype recommendation at a stated level of credibility. Genotypes and environments can be grouped according to the existing interaction pattern, which makes it possible to formulate specific recommendations. Moreover, the environments can be evaluated in order to find out which ones contribute similarly to the interaction and which ones should be discarded. The method handles unbalanced data in a natural way, showing efficiency for multi-environment trials. The prediction takes into account the instability and the interaction pattern of the observed data, in order to establish a direct comparison between genotypes of both the first and second seasons.
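For readers unfamiliar with the mechanics behind the biplot: the classical (non-Bayesian) core of AMMI is an SVD of the double-centered genotype-by-environment table, and the Bayesian treatment above places uncertainty on exactly these singular values and vectors. A minimal sketch in Python, with all data and dimensions hypothetical:

```python
import numpy as np

# Hypothetical genotype-by-environment table of mean yields (10 genotypes x 15 sites).
rng = np.random.default_rng(0)
Y = rng.normal(loc=5.0, scale=1.0, size=(10, 15))

# Double-centering isolates the GxE interaction term (the classical AMMI step).
GE = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + Y.mean()

# The SVD of the interaction matrix yields the singular values and the
# genotype/environment scores plotted in an AMMI biplot.
U, s, Vt = np.linalg.svd(GE, full_matrices=False)
k = 2  # number of multiplicative terms (IPCA axes) retained
genotype_scores = U[:, :k] * np.sqrt(s[:k])        # biplot coordinates, genotypes
environment_scores = Vt[:k, :].T * np.sqrt(s[:k])  # biplot coordinates, environments
print(genotype_scores.shape, environment_scores.shape)
```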
Abstract:
In this work, the singular value decomposition (SVD) of an n x m matrix A representing a magnetic anomaly is viewed as a two-dimensional coherence filtering method that separates the correlatable and non-correlatable information contained in the magnetic data matrix A. The SVD filter is defined through the expansion of A into eigenimages and singular values. Each eigenimage is given by the outer product of the basis vectors (eigenvectors) associated with the eigenvalue-eigenvector problems of the covariance matrices ATA and AAT. This filtering method is based on the fact that the eigenimages associated with large singular values concentrate most of the correlatable information present in the data, while the uncorrelated part, presumed to consist of noise caused by external magnetic sources and noise introduced by the measurement process, is concentrated in the remaining eigenimages. We applied this method to several examples of synthetic magnetic data. The method was then applied to data from the aerial survey carried out by PETROBRÁS in the Carauari-Norte Project (Solimões Basin), in order to analyze its potential for identifying, eliminating or attenuating noise, and as a possible way to enhance particular features of the anomaly generated by deep and shallow sources. This work also presents the possibility of introducing a static or dynamic shift in the magnetic profiles, with the aim of increasing the correlation (coherence) between them, thereby concentrating as much of the correlatable signal as possible in the first few eigenimages. Another very important aspect of this expansion of the data matrix into eigenimages and singular values is computational: storing the data contained in the matrix requires n x m memory addresses, but this can be reduced considerably by using p eigenimages, so that the number of memory addresses falls to p x (n + m + 1) without altering the anomaly, with practically perfect reproduction. We therefore conclude that an appropriate choice of the number and indices of the eigenimages used in the decomposition demonstrates the potential of the method for magnetic data processing.
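Both the coherence filter and the compression figure above reduce, in practice, to a truncated SVD. A minimal sketch, assuming a synthetic data matrix in place of the survey data (all names and sizes are illustrative):

```python
import numpy as np

# Hypothetical magnetic-anomaly data matrix A (n profiles x m stations):
# a smooth, correlatable signal plus incoherent noise.
rng = np.random.default_rng(1)
n, m = 64, 128
signal = np.outer(np.sin(np.linspace(0, 3, n)), np.cos(np.linspace(0, 5, m)))
A = signal + 0.1 * rng.normal(size=(n, m))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep the p eigenimages with the largest singular values (the coherence filter).
p = 3
A_filtered = (U[:, :p] * s[:p]) @ Vt[:p, :]

# Storage drops from n*m values to p*(n + m + 1): p left vectors of length n,
# p right vectors of length m, and p singular values.
print(n * m, p * (n + m + 1))
```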
Abstract:
A new approach called the Modified Barrier Lagrangian Function (MBLF) to solve the Optimal Reactive Power Flow problem is presented. In this approach, the inequality constraints are treated by the Modified Barrier Function (MBF) method, which has a finite convergence property: the optimal solution in the MBF method can actually lie on the boundary of the feasible set, so the inequality constraints can be exactly zero. Another property of the MBF method is that the barrier parameter does not need to be driven to zero to attain the solution; therefore, the conditioning of the Hessian matrix involved is greatly enhanced. To show this, a comparative analysis of the numerical conditioning of the Hessian matrix of the MBLF approach, via singular value decomposition, is carried out. The feasibility of the proposed approach is also demonstrated through comparative tests against the Interior Point Method (IPM) using various IEEE test systems and two networks derived from the Brazilian generation/transmission system. The results show that the MBLF method is computationally more attractive than the IPM in terms of speed, number of iterations and numerical conditioning. (C) 2011 Elsevier B.V. All rights reserved.
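The conditioning comparison mentioned above comes down to the ratio of the extreme singular values of the Hessian. A small illustration (the matrix is a hypothetical stand-in, not one of the paper's test systems):

```python
import numpy as np

# The spectral condition number of a (Hessian) matrix is the ratio of its
# largest to smallest singular value; this is the quantity compared between
# the MBLF and IPM Hessians in the abstract.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])  # hypothetical Hessian
s = np.linalg.svd(H, compute_uv=False)  # singular values, descending
cond = s[0] / s[-1]
print(cond)  # equals np.linalg.cond(H, 2)
```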
Abstract:
A basic approach to studying an NVH problem is to break the system down into three basic elements: source, path and receiver. While the receiver (response) and the transfer path can be measured, it is difficult to measure the source (forces) acting on the system. It therefore becomes necessary to predict these forces to know how they influence the responses, which requires inverting the transfer path. The Singular Value Decomposition (SVD) method is used to decompose the transfer path matrix into its principal components, which is required for the inversion. The usual approach to force prediction rejects the small singular values obtained during the SVD by setting a threshold, since these small values dominate the inverse matrix. This thresholding may, however, reject important singular values, severely affecting the force prediction. The new approach discussed in this report looks at the column space of the transfer path matrix, which is the basis for the predicted response. The response participation indicates how the small singular values influence the force participation. The ability to accurately reconstruct the response vector is important to establish confidence in the predicted force vector. The goal of this report is to suggest, through examples, a solution that is mathematically feasible, physically meaningful and numerically more efficient. This understanding adds new insight into the behavior of the current code and how to apply these algorithms and insights to new codes.
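The "usual approach" the report questions is a truncated-SVD pseudoinverse of the transfer path matrix. A minimal sketch of that baseline, with a hypothetical transfer path matrix and an arbitrary threshold:

```python
import numpy as np

def predict_forces(H, y, rel_threshold=1e-3):
    """Estimate forces f from responses y = H f by truncated-SVD inversion.

    Singular values below rel_threshold * s_max are rejected: the usual
    regularization step whose side effects the report examines."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_threshold * s[0]
    s_inv = np.where(keep, 1.0 / s, 0.0)   # invert only the retained values
    return Vt.T @ (s_inv * (U.T @ y))

# Hypothetical transfer-path matrix (responses x forces) and measured responses.
rng = np.random.default_rng(2)
H = rng.normal(size=(8, 4))
f_true = rng.normal(size=4)
y = H @ f_true
print(predict_forces(H, y))  # close to f_true when H is well conditioned
```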
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation for localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are otherwise traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms increases, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvements over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware or application-specific integrated circuits (ASICs), which offer a large number of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable and easy to implement. First, this thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, thus achieving closer-to-real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput, and various modern cores available in FPGAs were used to maximize performance; these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired root within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
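As a point of reference for the rooting step, a generic Newton iteration on a complex polynomial looks as follows. This is only a simplified stand-in for the thesis's parallel, root-MUSIC-specific method, with illustrative inputs; in spirit, each starting point would be handled by its own processing element:

```python
import numpy as np

def newton_root(coeffs, z0, iters=50, tol=1e-12):
    """Newton iteration for one root of a complex polynomial.

    coeffs are highest-degree-first; z0 is the starting point."""
    p = np.polynomial.Polynomial(coeffs[::-1])  # Polynomial wants lowest-first
    dp = p.deriv()
    z = z0
    for _ in range(iters):
        step = p(z) / dp(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Example: the two roots of z^2 + 1 from two starting points.
print(newton_root([1, 0, 1], 0.5 + 0.5j))   # -> 1j
print(newton_root([1, 0, 1], 0.5 - 0.5j))   # -> -1j
```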
Abstract:
FEAST is a recently developed eigenvalue algorithm which computes selected interior eigenvalues of real symmetric matrices. It uses projections based on contour integrals of the resolvent. A weakness is that the existing algorithm relies on accurate a priori estimates of the number of eigenvalues within the contour. Examining the singular values of the projections on moderately sized, randomly generated test problems motivates orthogonalization-based improvements to the algorithm. The singular value distributions provide experimentally robust estimates of the number of eigenvalues within the contour. The algorithm is modified to handle both Hermitian and general complex matrices. The original algorithm (based on circular contours and Gauss-Legendre quadrature) is extended to contours and quadrature schemes that are recursively subdividable, and a general complex recursive algorithm is implemented on rectangular and diamond contours. The accuracy of different quadrature schemes for various contours is investigated.
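The counting idea can be illustrated with an exact spectral projector: the number of singular values of the projected block standing clearly above zero estimates the eigenvalue count inside the contour. In FEAST the projector is only approximated by quadrature of the resolvent; this sketch forms it exactly for clarity (all matrices hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.diag([0.1, 0.2, 0.3, 2.0, 3.0])   # three eigenvalues inside |z| < 1
w, V = np.linalg.eigh(A)
inside = np.abs(w) < 1.0
P = V[:, inside] @ V[:, inside].T        # exact spectral projector onto that subspace

Y = rng.normal(size=(5, 4))              # random block, width >= expected count
s = np.linalg.svd(P @ Y, compute_uv=False)
count = int(np.sum(s > 1e-8 * s[0]))     # near-zero singular values drop out
print(count)                             # -> 3
```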
Abstract:
We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
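The polynomial thresholding operator's exact form is specific to the paper, but the "more traditional shrinkage-thresholding operator" it is combined with is singular value soft-thresholding. A minimal sketch of that baseline on synthetic data:

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: shrink every singular value by tau
    and drop the ones that fall below zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(4)
L = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 15))  # rank-3 "clean" part
X = L + 0.05 * rng.normal(size=(20, 15))                 # noisy observation
# Noise-level singular values are removed; the recovered rank is typically 3.
print(np.linalg.matrix_rank(svt(X, 1.0)))
```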
Abstract:
The estimation of modal parameters of a structure from ambient measurements has attracted the attention of many researchers in recent years. The procedure is now well established, and the use of state space models, stochastic system identification methods and stabilization diagrams makes it possible to identify the modes of the structure. In this paper the contribution of each identified mode to the measured vibration is discussed. This modal contribution is computed using the Kalman filter and is an indicator of the importance of the modes. The variation of the modal contribution with the order of the model is also studied. This analysis suggests selecting the order of the state space model as the order that includes the modes with the highest contributions. The order obtained using this method is compared to those obtained with other well-known methods, such as the Akaike criterion for time series or the singular values of the weighted projection matrix in the Stochastic Subspace Identification method. Both simulated and measured vibration data are used to show the practicality of the derived technique. Finally, it is important to remark that the method can be used with any identification method that works with a state space model.
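The singular-value criterion mentioned above can be sketched as follows: the order is read from the largest gap in the singular values of the projection matrix. The matrix below is a random stand-in, not one built from measured data:

```python
import numpy as np

rng = np.random.default_rng(5)
P = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 30))  # effective rank 4
P += 1e-9 * rng.normal(size=(30, 30))                    # small noise floor

s = np.linalg.svd(P, compute_uv=False)
gaps = s[:-1] / s[1:]                 # ratios of consecutive singular values
order = int(np.argmax(gaps)) + 1      # the largest drop marks the model order
print(order)                          # -> 4
```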
Abstract:
This paper explores a new method of analysing fatigue within the muscles predominantly used during microsurgery. The electromyographic (EMG) data captured from these muscles are analysed for any defining patterns relating to muscle fatigue. The analysis consists of dynamically embedding the EMG signal from a single muscle channel into an embedded matrix. Muscle fatigue is then quantified by an entropy measure defined on the singular values of the dynamically embedded (DE) matrix. The paper compares this new method with the traditional method of tracking mean frequency shifts in the EMG signal's power spectral density. Linear regressions are fitted to the results of both methods, and the coefficients of variation of both their slopes and points of intercept are determined. It is shown that the complexity method is slightly more robust, in that the coefficients of variation for the DE method show lower variability than those of the conventional mean frequency analysis.
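A minimal sketch of this kind of complexity measure, with the embedding dimension and the stand-in signal chosen arbitrarily (the paper's exact embedding and normalization may differ):

```python
import numpy as np

def de_entropy(x, dim=10):
    """Entropy of the normalized singular values of a dynamically embedded
    (delay) matrix built from one signal channel."""
    X = np.lib.stride_tricks.sliding_window_view(x, dim)  # (len(x)-dim+1, dim)
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()          # normalize the singular value spectrum
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(6)
emg = rng.normal(size=1000)  # stand-in for a recorded EMG channel
print(de_entropy(emg))
```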
Abstract:
The objectives of this research are to analyze and develop a modified Principal Component Analysis (PCA) and to develop a two-dimensional PCA with applications in image processing. PCA is a classical multivariate technique whose mathematical treatment is based purely on the eigensystem of positive-definite symmetric matrices. Its main function is to statistically transform a set of correlated variables into a new set of uncorrelated variables over $\mathbb{R}^n$, retaining most of the variation present in the original variables. The variances of the Principal Components (PCs) obtained from the modified PCA form a correlation matrix of the original variables. The decomposition of this correlation matrix into a diagonal matrix produces a set of orthonormal basis vectors that can be used to linearly transform the given PCs; it is this linear transformation that reproduces the original variables. The two-dimensional PCA can be devised as two successive applications of one-dimensional PCA, and it can be shown that, for an $m \times n$ matrix, the PCs obtained from the two-dimensional PCA are the singular values of that matrix. In this research, several PCA-based applications for image analysis are developed: edge detection, feature extraction, and multi-resolution PCA decomposition and reconstruction.
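The stated link between the two-dimensional PCA and the singular values can be checked numerically: the eigenvalues of AA^T (one PCA pass along one dimension) are the squared singular values of A. A short check with a random matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(6, 4))

eigvals = np.linalg.eigvalsh(A @ A.T)          # PCA eigenvalues along the rows
s_from_pca = np.sqrt(np.clip(eigvals, 0, None))[::-1][:4]  # descending, top 4
s_from_svd = np.linalg.svd(A, compute_uv=False)
print(np.allclose(s_from_pca, s_from_svd))     # -> True
```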
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
We show that L2-bounded singular integrals in metric spaces with respect to general measures and kernels converge weakly. This implies a kind of average convergence almost everywhere. For measures with zero density we prove the almost everywhere existence of principal values.
Abstract:
Minkowski's ?(x) function can be seen as the confrontation of two number systems: regular continued fractions and the alternated dyadic system. This way of looking at it permits us to prove that its derivative, as also happens for many other non-decreasing singular functions from [0,1] to [0,1], can attain only two values when it exists: zero and infinity. It is also proved that if the average of the partial quotients in the continued fraction expansion of x is greater than k* = 5.31972, and ?'(x) exists, then ?'(x) = 0. In the same way, if the same average is less than k** = 2 log2(φ), where φ is the golden ratio, then ?'(x) = infinity. Finally, some results are presented concerning metric properties of continued fraction and alternated dyadic expansions.
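Stated compactly (a direct transcription of the abstract's two criteria, writing x = [0; a1, a2, ...] for the continued fraction expansion and ā_n for the running average of the partial quotients; the precise limiting sense of the average is in the paper):

```latex
x = [0; a_1, a_2, \dots], \qquad \bar{a}_n = \frac{a_1 + \cdots + a_n}{n}

\bar{a}_n > k^{*} = 5.31972 \;\Longrightarrow\; ?'(x) = 0 \quad (\text{when } ?'(x) \text{ exists})

\bar{a}_n < k^{**} = 2\log_2 \varphi \;\Longrightarrow\; ?'(x) = \infty
```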
Abstract:
In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the equation ∇ · γ(x)∇ + c(x), with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate, as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with regard to their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x) at points x ∈ ∂D of the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
Abstract:
In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems that have a singular measure. Using the block maxima approach described in Faranda et al. [2011], we show numerically that the Extreme Value distribution for these maps can be associated with the Generalised Extreme Value family, with parameters that scale with the information dimension. The numerical analyses are performed on a few low-dimensional maps. For the middle-third Cantor set and the Sierpinski triangle obtained using Iterated Function Systems, the experimental parameters show very good agreement with the theoretical values. For strange attractors such as the Lozi and Hénon maps, a slower convergence to the Generalised Extreme Value distribution is observed; even with large statistics, the observed convergence is slower than for maps with an absolutely continuous invariant measure. Nevertheless, within the computed uncertainty range, the results are in good agreement with the theoretical estimates.
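The block maxima procedure itself is simple to sketch. The following uses iid stand-in data rather than orbits of a map, purely to show the mechanics (scipy's genextreme is assumed available; note its shape convention differs in sign from the usual GEV parameter):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(8)
series = rng.exponential(size=100_000)          # iid stand-in, not map orbits
block = 1000
maxima = series.reshape(-1, block).max(axis=1)  # one maximum per block
shape, loc, scale = genextreme.fit(maxima)      # fit the GEV family
print(shape, loc, scale)  # shape near 0: exponential block maxima are Gumbel-like
```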