219 results for singular value decomposition (SVD)
at Indian Institute of Science - Bangalore - India
Abstract:
Purpose: A computationally efficient algorithm (of linear iterative type) based on the singular value decomposition (SVD) of the Jacobian has been developed for use in rapid dynamic near-infrared (NIR) diffuse optical tomography. Methods: Numerical and experimental studies were conducted to demonstrate the computational efficacy of this SVD-based algorithm over conventional optical image reconstruction algorithms. Results: These studies indicate that linear iterative algorithms perform better than nonlinear iterative (conventional) algorithms in terms of contrast recovery (quantitation of optical images), provided the initial guess is close to the actual solution, whereas the nonlinear algorithms can provide better-quality images than the linear iterative algorithms. Moreover, the analytical and numerical equivalence of the SVD-based algorithm to linear iterative algorithms was established as part of this work. It is also demonstrated that SVD-based image reconstruction typically requires O(NN²) operations per iteration, as contrasted with linear and nonlinear iterative methods that, respectively, require O(NN³) and O(NN⁶) operations, with "NN" being the number of unknown parameters in the optical image reconstruction procedure. Conclusions: This computationally efficient SVD-based algorithm can make the integration of the image reconstruction procedure with data acquisition feasible, in turn making rapid dynamic NIR tomography viable in the clinic for continuously monitoring hemodynamic changes in tissue pathophysiology.
Abstract:
We consider a slow fading multiple-input multiple-output (MIMO) system with channel state information at both the transmitter and receiver. A well-known precoding scheme is based upon the singular value decomposition (SVD) of the channel matrix, which transforms the MIMO channel into parallel subchannels. Despite having low maximum likelihood decoding (MLD) complexity, this SVD precoding scheme provides a diversity gain which is limited by the diversity gain of the weakest subchannel. We therefore propose X- and Y-Codes, which improve the diversity gain of the SVD precoding scheme while maintaining the low MLD complexity, by jointly coding information across a pair of subchannels. In particular, subchannels with high diversity gain are paired with those having low diversity gain. A pair of subchannels is jointly encoded using a 2 × 2 real matrix, which is fixed a priori and does not change with each channel realization. For X-Codes, these rotation matrices are parameterized by a single angle, while for Y-Codes, these matrices are left triangular matrices. Moreover, we propose X- and Y-Precoders with the same structure as X- and Y-Codes, but with encoding matrices adapted to each channel realization. We observe that X-Codes/Precoders are good for well-conditioned channels, while Y-Codes/Precoders are good for ill-conditioned channels.
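The pairing idea can be sketched in numpy (channel size, pairing, rotation angle, and BPSK symbols are all illustrative assumptions, not the paper's design): strong subchannels are paired with weak ones, and each pair is rotated by a fixed 2 × 2 real matrix before the SVD precoder is applied.

```python
import numpy as np

rng = np.random.default_rng(1)
nt = nr = 4
H = rng.standard_normal((nr, nt))           # one slow-fading channel realization

# SVD precoding: H = U diag(s) V^T turns the channel into parallel subchannels.
U, s, Vt = np.linalg.svd(H)

# X-Code-style pairing: strongest subchannel with the weakest, and so on.
pairs = [(0, nt - 1), (1, nt - 2)]

def rotation(theta):
    c, sn = np.cos(theta), np.sin(theta)
    return np.array([[c, -sn], [sn, c]])

theta = 0.2                                 # fixed a priori, not channel dependent
x_info = rng.choice([-1.0, 1.0], size=nt)   # information symbols (BPSK, illustrative)

x = np.empty(nt)
for (i, j) in pairs:
    x[[i, j]] = rotation(theta) @ x_info[[i, j]]

# Transmit V x; the receiver applies U^T and sees diag(s) x plus noise.
y_noiseless = U.T @ (H @ (Vt.T @ x))
assert np.allclose(y_noiseless, s * x)
```

Because the rotation couples each strong subchannel to a weak one, an error event must now be bad on both subchannels of a pair, which is the source of the improved diversity gain.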
Abstract:
We consider Gaussian multiple-input multiple-output (MIMO) channels with discrete input alphabets. We propose a non-diagonal precoder based on X-Codes [1] to increase the mutual information. The MIMO channel is transformed into a set of parallel subchannels using singular value decomposition (SVD), and X-Codes are then used to pair the subchannels. X-Codes are fully characterized by the pairings and the 2 × 2 real rotation matrices for each pair (parameterized with a single angle). This precoding structure enables us to express the total mutual information as a sum of the mutual information of all the pairs. The problem of finding the optimal precoder with the above structure, which maximizes the total mutual information, is equivalent to i) optimizing the rotation angle and the power allocation within each pair and ii) finding the optimal pairing and power allocation among the pairs. It is shown that the mutual information achieved with the proposed pairing scheme is very close to that achieved with the optimal precoder by Cruz et al., and significantly better than the mercury/waterfilling strategy by Lozano et al. Our approach greatly simplifies both the precoder optimization and the detection complexity, making it suitable for practical applications.
Abstract:
We consider a time division duplex multiple-input multiple-output (n_t × n_r MIMO) system. Using channel state information (CSI) at the transmitter, singular value decomposition (SVD) of the channel matrix is performed. This transforms the MIMO channel into parallel subchannels, but has a low overall diversity order. Hence, we propose X-Codes, which achieve a higher diversity order by pairing the subchannels prior to SVD precoding. In particular, each pair of information symbols is encoded by a fixed 2 × 2 real rotation matrix. X-Codes can be decoded using n_r very low complexity two-dimensional real sphere decoders. Error probability analysis for X-Codes enables us to choose the optimal pairing and the optimal rotation angle for each pair. Finally, we show that our new scheme outperforms other low complexity precoding schemes.
Abstract:
We consider Gaussian multiple-input multiple-output (MIMO) channels with discrete input alphabets. We propose a non-diagonal precoder based on the X-Codes in [1] to increase the mutual information. The MIMO channel is transformed into a set of parallel subchannels using singular value decomposition (SVD), and X-Codes are then used to pair the subchannels. X-Codes are fully characterized by the pairings and a 2 × 2 real rotation matrix for each pair (parameterized with a single angle). This precoding structure enables us to express the total mutual information as a sum of the mutual information of all the pairs. The problem of finding the optimal precoder with the above structure, which maximizes the total mutual information, is solved by: i) optimizing the rotation angle and the power allocation within each pair and ii) finding the optimal pairing and power allocation among the pairs. It is shown that the mutual information achieved with the proposed pairing scheme is very close to that achieved with the optimal precoder by Cruz et al., and is significantly better than the mercury/waterfilling strategy by Lozano et al. Our approach greatly simplifies both the precoder optimization and the detection complexity, making it suitable for practical applications.
Abstract:
Precoding for multiple-input multiple-output (MIMO) antenna systems is considered with perfect channel knowledge available at both the transmitter and the receiver. For two transmit antennas and QAM constellations, a real-valued precoder is proposed which is approximately optimal (with respect to the minimum Euclidean distance between points in the received signal space) among real-valued precoders based on the singular value decomposition (SVD) of the channel. The proposed precoder is easily obtainable for arbitrary QAM constellations, unlike the known complex-valued optimal precoder by Collin et al. for two transmit antennas, which exists for 4-QAM alone and is extremely hard to obtain for larger QAM constellations. The proposed precoding scheme is extended to a higher number of transmit antennas along the lines of the E-d_min precoder for 4-QAM by Vrigneau et al., which is an extension of the complex-valued optimal precoder for 4-QAM. The proposed precoder's ML-decoding complexity as a function of the constellation size M is only O(√M), while that of the E-d_min precoder is O(M√M) (M = 4). Compared to the recently proposed X- and Y-precoders, the error performance of the proposed precoder is significantly better, while being only marginally worse than that of the E-d_min precoder for 4-QAM. It is argued that the proposed precoder provides full diversity for QAM constellations, and this is supported by simulation plots of the word error probability for 2 × 2, 4 × 4 and 8 × 8 systems.
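The minimum-distance criterion behind such precoders can be sketched by brute force in numpy (the subchannel gains, the power-allocation grid, and the restriction to a diagonal precoder are illustrative assumptions, not the precoder proposed in the paper): for each candidate precoder, compute the minimum Euclidean distance between distinct received constellation points and keep the best candidate.

```python
import numpy as np
from itertools import product

# 4-QAM constellation, normalized to unit average energy (illustrative).
qam4 = np.array([a + 1j * b for a, b in product([-1, 1], repeat=2)]) / np.sqrt(2)

def min_received_distance(F, constellation):
    """Minimum Euclidean distance between distinct precoded vectors F @ x."""
    points = [F @ np.array(x) for x in product(constellation, repeat=F.shape[1])]
    return min(np.linalg.norm(p - q)
               for i, p in enumerate(points)
               for q in points[i + 1:])

# Two subchannels with SVD gains (1.0, 0.3); split power p vs 1 - p.
sigma = np.diag([1.0, 0.3])
d_best, p_best = max(
    (min_received_distance(sigma @ np.diag([np.sqrt(p), np.sqrt(1 - p)]), qam4), p)
    for p in np.linspace(0.05, 0.95, 19)
)
assert d_best > 0
```

The optimal precoders discussed in the abstract replace this exhaustive search with closed-form or low-dimensional optimizations, which is what makes them usable for large QAM constellations.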
Abstract:
Consider N points in R^d and M local coordinate systems that are related through unknown rigid transforms. For each point, we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the N points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, although nonconvex, has a well-known closed-form solution when M = 2 (based on the singular value decomposition (SVD)). However, no closed-form solution is known for M ≥ 3. In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely, a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results from rigidity theory, we prove conditions for exact and stable recovery for the SDP relaxation. In particular, we prove that the SDP relaxation can guarantee recovery under more adversarial conditions compared to earlier proposed spectral relaxations, and we derive error bounds for the registration error incurred by the SDP relaxation. We also present results of numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the SDP (i.e., we are able to solve the original nonconvex least-squares problem) up to a certain noise threshold, and (b) the SDP performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
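The closed-form M = 2 solution mentioned above is the classical SVD-based Procrustes/Kabsch alignment. A noiseless numpy sketch (synthetic points and transform, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 10, 3
P = rng.standard_normal((N, d))              # points in the first coordinate system

# Ground-truth rigid transform relating the two systems.
R_true, _ = np.linalg.qr(rng.standard_normal((d, d)))
R_true *= np.sign(np.linalg.det(R_true))     # force det = +1 (a proper rotation)
t_true = rng.standard_normal(d)
Q = P @ R_true.T + t_true                    # the same points in the second system

# Closed-form least-squares registration for M = 2:
# center both clouds, take the SVD of the cross-covariance, and
# correct for a possible reflection.
Pc, Qc = P - P.mean(0), Q - Q.mean(0)
U, s, Vt = np.linalg.svd(Pc.T @ Qc)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
R_est = Vt.T @ D @ U.T
t_est = Q.mean(0) - R_est @ P.mean(0)

assert np.allclose(R_est, R_true)
assert np.allclose(t_est, t_true)
```

With noisy measurements the same formula gives the least-squares optimum; it is precisely the M ≥ 3 generalization of this step that has no closed form and motivates the SDP relaxation.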
Abstract:
A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time-consuming. To automate the selection of the best segmented document, we either need the ground truth of the document or must propose an evaluation metric. If ground truth is available, then precision and recall can be used to choose the best binarized document. But what about the case when ground truth is not available? Can we come up with a metric that evaluates these binarized documents? Hence, we propose a metric to evaluate binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the best binarized document, i.e., the one closest to the ground truth of the document.
Abstract:
The standard approach to signal reconstruction in frequency-domain optical-coherence tomography (FDOCT) is to apply the inverse Fourier transform to the measurements. This technique offers limited resolution (due to Heisenberg's uncertainty principle). We propose a new super-resolution reconstruction method based on a parametric representation. We consider multilayer specimens, wherein each layer has a constant refractive index, and show that the backscattered signal from such a specimen fits accurately into the framework of the finite-rate-of-innovation (FRI) signal model and is represented by a finite number of free parameters. We deploy the high-resolution Prony method and show that high-quality, super-resolved reconstruction is possible with fewer measurements (about one-fourth of the number required for the standard Fourier technique). To further improve robustness to noise in practical scenarios, we take advantage of an iterated singular value decomposition algorithm (Cadzow denoiser). We present results of Monte Carlo analyses and assess the statistical efficiency of the reconstruction techniques by comparing their performance against the Cramér-Rao bound. Reconstruction results on experimental data obtained from technical as well as biological specimens show a distinct improvement in resolution and signal-to-reconstruction noise offered by the proposed method in comparison with the standard approach.
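The Cadzow denoiser referred to above alternates two projections: truncate the SVD of a Hankel matrix built from the signal to the model order, then restore the Hankel structure by averaging anti-diagonals. A numpy sketch (the two-exponential test signal, noise level, and iteration count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 64, 2                       # samples; model order (number of layers)
t = np.arange(n)
clean = np.exp(1j * 0.5 * t) + 0.6 * np.exp(1j * 1.3 * t)   # 2 complex exponentials
noisy = clean + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def cadzow(x, rank, n_iter=20):
    L = len(x) // 2
    for _ in range(n_iter):
        # Hankel matrix of the current estimate: H[i, j] = x[i + j].
        H = np.array([x[i:i + L] for i in range(len(x) - L + 1)])
        # Enforce the low-rank (FRI) structure by SVD truncation.
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Re-enforce the Hankel structure by anti-diagonal averaging.
        x = np.array([np.mean(H[::-1].diagonal(k))
                      for k in range(-H.shape[0] + 1, H.shape[1])])
    return x

denoised = cadzow(noisy, K)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
assert err_after < err_before
```

The denoised samples are then handed to the Prony step, which is far more reliable once the rank-K structure has been restored.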
Abstract:
In this paper, we consider the problem of power allocation in the MIMO wiretap channel for secrecy in the presence of multiple eavesdroppers. Perfect knowledge of the destination channel state information (CSI) and only statistical knowledge of the eavesdroppers' CSI are assumed. We first consider the MIMO wiretap channel with Gaussian input. Using Jensen's inequality, we transform the secrecy rate max-min optimization problem into a single maximization problem. We use the generalized singular value decomposition and transform the problem into a concave maximization problem which maximizes the sum secrecy rate of scalar wiretap channels subject to linear constraints on the transmit covariance matrix. We then consider the MIMO wiretap channel with finite-alphabet input. We show that the transmit covariance matrix obtained for the case of Gaussian input, when used in the MIMO wiretap channel with finite-alphabet input, can lead to zero secrecy rate at high transmit powers. We then propose a power allocation scheme with an additional power constraint which alleviates this secrecy rate loss and gives non-zero secrecy rates at high transmit powers.
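The scalar decomposition that the GSVD step produces can be illustrated with a toy computation (all gains and powers below are made-up numbers, not results from the paper): the sum secrecy rate is a sum of per-subchannel scalar wiretap rates, and subchannels where the eavesdropper is stronger receive no power.

```python
import numpy as np

# Per-subchannel gains after the GSVD-style scalar decomposition (illustrative).
g_d = np.array([2.0, 1.0, 0.5])    # destination subchannel gains
g_e = np.array([0.4, 0.6, 0.9])    # eavesdropper subchannel gains
p = np.array([1.5, 1.0, 0.0])      # a power allocation (total power 2.5)

# Scalar Gaussian wiretap rate per subchannel:
# log2(1 + p * g_d) - log2(1 + p * g_e), clipped at zero.
rates = np.log2(1 + p * g_d) - np.log2(1 + p * g_e)
secrecy_rate = np.maximum(rates, 0).sum()

assert secrecy_rate > 0
# The subchannel where the eavesdropper is stronger (g_e > g_d) gets zero power.
assert rates[2] == 0
```

The concave maximization described in the abstract optimizes exactly this kind of power vector p, subject to linear constraints induced by the transmit covariance matrix.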
Abstract:
Given an n × n complex matrix A, let μ_A(x, y) := (1/n) |{1 ≤ i ≤ n : Re λ_i ≤ x, Im λ_i ≤ y}| be the empirical spectral distribution (ESD) of its eigenvalues λ_i ∈ C, i = 1, ..., n. We consider the limiting distribution (both in probability and in the almost-sure convergence sense) of the normalized ESD μ_{A_n/√n} of a random matrix A_n = (a_ij), 1 ≤ i, j ≤ n, where the random variables a_ij − E(a_ij) are i.i.d. copies of a fixed random variable x with unit variance. We prove a universality principle for such ensembles, namely, that the limit distribution in question is independent of the actual choice of x. In particular, in order to compute this distribution, one can assume that x is real or complex Gaussian. As a related result, we show how laws for this ESD follow from laws for the singular value distribution of A_n/√n − zI for complex z. As a corollary, we establish the circular law conjecture (both almost surely and in probability), which asserts that μ_{A_n/√n} converges to the uniform measure on the unit disc when the a_ij have zero mean.
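Both the universality principle and the circular law can be checked empirically in a few lines of numpy (the matrix size and tolerance are illustrative): with Rademacher (±1) entries instead of Gaussian ones, the eigenvalues of A_n/√n still fill the unit disc nearly uniformly, so the fraction inside radius r is close to r².

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
# Zero-mean, unit-variance entries; by universality the limiting ESD does not
# depend on this choice, so we use Rademacher entries instead of Gaussians.
A = rng.choice([-1.0, 1.0], size=(n, n))

eigs = np.linalg.eigvals(A / np.sqrt(n))

# Circular law: the ESD tends to the uniform measure on the unit disc, so the
# fraction of eigenvalues with |lambda| <= r should be near r^2 for r <= 1.
for r in (0.5, 0.8, 1.0):
    frac = np.mean(np.abs(eigs) <= r)
    assert abs(frac - r**2) < 0.1
```

At n = 400 the agreement is already within a few percent; the theorem asserts convergence as n grows.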
Abstract:
An important tool in signal processing is the use of eigenvalue and singular value decompositions for extracting information from time-series/sensor-array data. These tools are used in the so-called subspace methods that underlie solutions to the harmonic retrieval problem in time series and the direction-of-arrival (DOA) estimation problem in array processing. The subspace methods require knowledge of the eigenvectors of the underlying covariance matrix to estimate the parameters of interest. Eigenstructure estimation in signal processing has two important classes: (i) estimating the eigenstructure of a given covariance matrix and (ii) updating the eigenstructure estimate given the current estimate and new data. In this paper, we survey algorithms for both of these classes that are useful for harmonic retrieval and DOA estimation problems. We begin by surveying key results in the literature and then describe, in some detail, energy-function-minimization approaches that underlie a class of feedback neural networks. Our approaches estimate some or all of the eigenvectors corresponding to the repeated minimum eigenvalue, as well as multiple orthogonal eigenvectors corresponding to the ordered eigenvalues of the covariance matrix. Our presentation includes supporting analysis and simulation results. We point out that eigensubspace estimation is a vast area, and not all of its aspects can be fully covered in a single paper. (C) 1995 Academic Press, Inc.
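A minimal stand-in for class (i), not the energy-function networks surveyed in the paper: a shifted power iteration that recovers the eigenvector of the minimum eigenvalue of a covariance matrix, the quantity that Pisarenko-style subspace methods need (the synthetic covariance and iteration count are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(5)
d = 6
# Synthetic covariance matrix with a known eigenstructure.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
evals = np.array([5.0, 3.0, 2.0, 1.0, 0.5, 0.1])
C = Q @ np.diag(evals) @ Q.T

# Shifted power iteration: for c >= lambda_max, the dominant eigenvector of
# c*I - C is the eigenvector of the *minimum* eigenvalue of C.
c = np.trace(C)                   # crude upper bound on the largest eigenvalue
w = rng.standard_normal(d)
for _ in range(500):
    w = (c * np.eye(d) - C) @ w
    w /= np.linalg.norm(w)

v_min = Q[:, -1]                  # true minimum-eigenvalue eigenvector
assert abs(w @ v_min) > 0.99
assert abs(w @ C @ w - 0.1) < 1e-3   # Rayleigh quotient recovers lambda_min
```

The feedback networks in the paper solve the same problem by descending an energy function whose minima are these eigenvectors, which also handles the repeated-minimum-eigenvalue case.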
Abstract:
With deregulation, the calculation of total transfer capability (TTC), which is the basis for evaluating available transfer capability (ATC), has become very significant. TTC is an important index in power markets with a large volume of inter-area power exchanges and wheeling transactions taking place on an hourly basis. Its computation helps to achieve viable technical and commercial transmission operation. The aim of this paper is to evaluate TTC over the interconnections and to improve it using a reactive optimization technique and UPFC devices. Computations are carried out for normal and contingency cases such as single line, tie line and generator outages. Base and optimized results are presented, and the results show how reactive optimization and the unified power flow controller help to improve system conditions. In this paper, the repeated power flow method is used to calculate TTC owing to its ease of implementation. A case study is carried out on a 205-bus equivalent system, a part of the Indian Southern grid. Parameters such as voltage magnitude, L-index, minimum singular value and MW losses are computed to analyze the system performance.
Abstract:
Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using the characteristics of the model-based data-resolution matrix. Methods: The data-resolution matrix is computed based on the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. Based on the closeness of the diagonal value magnitude to the off-diagonal entries, the choice of independent measurements is made. The reconstruction results obtained using all measurements were compared to those obtained using only independent measurements, in both numerical and experimental phantom cases. A traditional singular value analysis was also performed for comparison with the results obtained using the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for the image reconstruction does not compromise the reconstructed image quality significantly, and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) were chosen at random, the reconstruction results had poor quality, with major boundary artifacts. The number of independent measurements obtained using data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: Data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy.
(C) 2012 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4736820
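For a Tikhonov-regularized reconstruction, the data-resolution matrix has the standard closed form R = A (AᵀA + λI)⁻¹Aᵀ, mapping actual data to predicted data. A numpy sketch (the matrix sizes, the regularizer, and the particular selection rule are illustrative assumptions; the abstract does not fully specify its criterion):

```python
import numpy as np

rng = np.random.default_rng(6)
m, p = 30, 20                      # measurements, unknown parameters (illustrative)
A = rng.standard_normal((m, p))    # sensitivity (Jacobian) matrix
lam = 1e-1                         # Tikhonov regularization parameter

# Data-resolution matrix: predicted_data = R @ actual_data for the
# regularized least-squares reconstruction.
R = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)

importance = np.diag(R)                     # diagonal: weight of each measurement
coupling = np.abs(R - np.diag(importance))  # off-diagonal: dependence among them

# One plausible reading of the selection rule: keep a measurement when its
# diagonal entry dominates its strongest coupling to any other measurement.
independent = [i for i in range(m) if importance[i] > coupling[i].max()]

assert np.allclose(R, R.T)                  # R is symmetric
assert np.linalg.eigvalsh(R).max() < 1.0    # eigenvalues are s^2/(s^2 + lam) < 1
```

Because R depends only on A and λ, not on any particular noisy data vector, this analysis can be run before data collection, which is the point made in the conclusions.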
Abstract:
We introduce a family of domains, which we call the μ-quotients, associated with an aspect of μ-synthesis. We show that the natural association that the symmetrized polydisc has with the corresponding spectral unit ball is also exhibited by the μ-quotient and its associated unit "μ-ball". Here, μ is the structured singular value associated with a linear subspace E of matrices. Specifically: we show that, for such an E, the Nevanlinna-Pick interpolation problem with matricial data in a unit "μ-ball", and in general position in a precise sense, is equivalent to a Nevanlinna-Pick interpolation problem for the associated μ-quotient. Along the way, we present some characterizations of the μ-quotients.