818 results for Linear matrix inequalities (LMI) techniques


Relevance: 40.00%

Publisher:

Abstract:

The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines, resulting in benefits to plant operators such as shorter downtimes, higher operational reliability, reduced operations and maintenance costs, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying them, and their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.

Relevance: 40.00%

Publisher:

Abstract:

Lateral or transaxial truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or in region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical-scan CT. Reconstruction with laterally truncated projection data, assuming it to be complete, gives severe artifacts which even penetrate into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical-scan truncated data, along with an extension of this technique known as the windowed linear prediction approach. The efficacy of the two techniques is shown using simulation with standard phantoms. A quantitative image-quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
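
The row-by-row completion idea can be sketched as follows. This is a generic autoregressive (linear-prediction) extrapolation of each truncated detector row, shown for one truncated side only; the paper's windowed variant, model order and padding length are not reproduced here, and the function and parameter names (complete_row_lp, order, n_missing) are illustrative.

```python
import numpy as np

def complete_row_lp(row, n_missing, order=10):
    """Extrapolate one laterally truncated detector row with a linear-prediction
    (autoregressive) model fitted to the measured samples."""
    # Fit AR coefficients by least squares: row[t] ~ sum_k a[k] * row[t-1-k]
    X = np.column_stack([row[order - k - 1:len(row) - k - 1] for k in range(order)])
    y = row[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)

    extended = list(row)
    for _ in range(n_missing):
        past = np.array(extended[-order:][::-1])     # most recent sample first
        extended.append(float(a @ past))
    return np.array(extended)

# Hypothetical usage: pad every truncated sinogram row before reconstruction.
sino = np.random.rand(64, 200)                       # laterally truncated projections
padded = np.stack([complete_row_lp(r, n_missing=28) for r in sino])
```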

Relevance: 40.00%

Publisher:

Abstract:

Estimation of secondary structure in polypeptides is important for studying their structure, folding and dynamics. In NMR spectroscopy, such information is generally obtained after sequence-specific resonance assignments are completed. We present here a new methodology for assignment of secondary structure type to spin systems in proteins directly from NMR spectra, without prior knowledge of resonance assignments. The methodology, named Combination of Shifts for Secondary Structure Identification in Proteins (CSSI-PRO), involves detection of a specific linear combination of backbone H-1(alpha) and C-13' chemical shifts in a two-dimensional (2D) NMR experiment based on G-matrix Fourier transform (GFT) NMR spectroscopy. Such linear combinations of shifts facilitate editing of residues belonging to alpha-helical/beta-strand regions into distinct spectral regions nearly independent of the amino acid type, thereby allowing the estimation of the overall secondary structure content of the protein. Comparison of the predicted secondary structure content with that estimated from the corresponding 3D structures and/or the Chemical Shift Index method for 237 proteins gives a correlation of more than 90% and an overall rmsd of 7.0%, which is comparable to other biophysical techniques used for structural characterization of proteins. Taken together, this methodology has a wide range of applications in NMR spectroscopy, such as rapid protein structure determination, monitoring conformational changes in protein-folding/ligand-binding studies and automated resonance assignment.
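
A toy illustration of the combined-shift idea (not the GFT NMR experiment or the published CSSI-PRO calibration): residues are classified from a linear combination of their secondary chemical shifts, i.e. observed minus random-coil values, of H-1(alpha) and C-13'. The weighting and threshold below are assumptions for illustration only.

```python
def classify_residue(d_halpha, d_co, threshold=0.5):
    """d_halpha, d_co: secondary shifts (ppm) of 1H-alpha and 13C' respectively."""
    score = d_co - d_halpha          # helix: C' downfield (+), H-alpha upfield (-)
    if score > threshold:
        return "helix"
    if score < -threshold:
        return "strand"
    return "coil"

residues = [(-0.35, 2.1), (0.42, -1.8), (0.05, 0.1)]   # toy (d_halpha, d_co) pairs
print([classify_residue(h, c) for h, c in residues])   # ['helix', 'strand', 'coil']
```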

Relevance: 40.00%

Publisher:

Abstract:

We have used the density matrix renormalization group (DMRG) method to study the linear and nonlinear optical responses of first-generation nitrogen-based dendrimers with donor-acceptor groups. We have employed the Pariser–Parr–Pople Hamiltonian to model the interacting pi electrons in these systems. Within the DMRG method we have used an innovative scheme to target excited states with large transition dipole to the ground state. This method reproduces exact optical gaps and polarization in systems where exact diagonalization of the Hamiltonian is possible. We have used a correction vector method, which tacitly takes into account the contribution of all excited states, to obtain the ground state polarizability, first hyperpolarizability, and two-photon absorption cross sections. We find that the lowest optical excitations as well as the lowest excited triplet states are localized. It is interesting to note that the first hyperpolarizability saturates more rapidly with system size than the linear polarizability, unlike in linear polyenes.
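
For reference, the correction-vector idea can be summarized as follows (sign conventions for the frequency and the damping term vary between papers; this is a generic statement, not necessarily the exact form used in the work). The first-order correction vector is obtained from a linear system and implicitly sums over all excited states,
\[
(H - E_0 - \omega - i\Gamma)\,\lvert\phi_i^{(1)}(\omega)\rangle = \tilde{\mu}_i\,\lvert\psi_0\rangle,
\qquad
\tilde{\mu}_i = \hat{\mu}_i - \langle\psi_0\rvert\hat{\mu}_i\lvert\psi_0\rangle ,
\]
so that, for example, the dynamic polarizability follows as
\[
\alpha_{ij}(\omega) = \langle\psi_0\rvert\tilde{\mu}_i\lvert\phi_j^{(1)}(\omega)\rangle
                    + \langle\psi_0\rvert\tilde{\mu}_j\lvert\phi_i^{(1)}(-\omega)\rangle ,
\]
with higher-order correction vectors built analogously for the first hyperpolarizability and the two-photon absorption cross sections.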

Relevance: 40.00%

Publisher:

Abstract:

There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that puts together some important constraints based on an abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (of power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we should find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables, and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms that solve the optimization problem in a distributed fashion. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
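
The two-stage structure of the Lagrange multiplier iteration can be illustrated on a small quadratic network-flow problem (minimize a quadratic arc-loss function subject to linear flow-balance constraints A x = b, where A is a node-arc incidence matrix). The Uzawa-style sketch below alternates a primal solve with a multiplier update; it mirrors the stage-one/stage-two split described above but is not the paper's algorithm, and all numbers are illustrative.

```python
import numpy as np

# Two-stage Lagrange-multiplier iteration (primal step, then multiplier update)
# for   minimize 0.5 * x^T Q x + c^T x   subject to   A x = b.
A = np.array([[ 1,  1,  0],     # node 1: arcs 1 and 2 leave
              [-1,  0,  1],     # node 2: arc 1 enters, arc 3 leaves
              [ 0, -1, -1]])    # node 3: arcs 2 and 3 enter
b = np.array([1.0, 0.0, -1.0])  # inject 1 unit at node 1, withdraw it at node 3
Q = np.diag([2.0, 1.0, 3.0])    # quadratic arc "loss" coefficients
c = np.zeros(3)

lam = np.zeros(3)
alpha = 0.5                      # dual step size
for it in range(200):
    # Stage 1: solve the stationarity condition  Q x + c + A^T lam = 0  for x.
    x = np.linalg.solve(Q, -(c + A.T @ lam))
    # Stage 2: update the multipliers from the flow-balance residual A x - b.
    residual = A @ x - b
    lam = lam + alpha * residual
    if np.linalg.norm(residual) < 1e-8:
        break

print("arc flows:", x, "iterations:", it + 1)
```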

Relevance: 40.00%

Publisher:

Abstract:

An algorithm that uses integer arithmetic is suggested. It transforms an m × n matrix to a diagonal form (of the structure of the Smith Normal Form). It then computes a reflexive generalized inverse of the matrix exactly and hence solves a system of linear equations error-free.
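
As a stand-in for the integer-only algorithm, the sketch below shows the same end result, an error-free solution of a linear system, using SymPy's exact rational arithmetic; SymPy also provides smith_normal_form (in sympy.matrices.normalforms) for the Smith-form diagonalization over the integers. This is illustrative only, not the paper's method.

```python
from sympy import Matrix

A = Matrix([[2, 1, 1],
            [4, -6, 0],
            [-2, 7, 2]])
b = Matrix([5, -2, 9])

x = A.solve(b)        # exact rational solution, no rounding error
print(x)              # Matrix([[1], [1], [2]])
assert A * x == b     # verifies the solution exactly
```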

Relevance: 40.00%

Publisher:

Abstract:

A simple and efficient algorithm for the bandwidth reduction of sparse symmetric matrices is proposed. It involves column-row permutations and is well suited to map onto the linear array topology of SIMD architectures. The efficiency of the algorithm is compared with that of other existing algorithms. The interconnectivity and the memory requirement of the linear array are discussed and the complexity of its layout area is derived. The parallel version of the algorithm mapped onto the linear array is then introduced and is explained with the help of an example. The optimality of the parallel algorithm is proved by deriving its time complexities on a single processor and on the linear array.
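
As a point of reference (not the algorithm of the paper), the standard reverse Cuthill-McKee reordering available in SciPy performs the same task: reducing the bandwidth of a sparse symmetric matrix by a symmetric row/column permutation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    rows, cols = M.nonzero()
    return int(np.max(np.abs(rows - cols)))

A = csr_matrix(np.array([[1, 0, 0, 1],
                         [0, 1, 1, 0],
                         [0, 1, 1, 0],
                         [1, 0, 0, 1]], dtype=float))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # permutation vector
A_perm = A[perm, :][:, perm]                           # apply to rows and columns
print(bandwidth(A), "->", bandwidth(A_perm))           # e.g. 3 -> 1
```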

Relevance: 40.00%

Publisher:

Abstract:

Gauss and Fourier have together provided us with the essential techniques for symbolic computation with linear arithmetic constraints over the reals and the rationals. These variable elimination techniques for linear constraints have particular significance in the context of the constraint logic programming languages developed in recent years. Variable elimination in linear equations (Gaussian elimination) is a fundamental technique in computational linear algebra and is therefore quite familiar to most of us. Elimination in linear inequalities (Fourier elimination), on the other hand, is intimately related to polyhedral theory and aspects of linear programming that are not quite as familiar. In addition, the high complexity of elimination in inequalities has forced the consideration of intricate specializations of Fourier's original method. The intent of this survey article is to acquaint the reader with these connections and developments. The latter part of the article dwells on the thesis that variable elimination in linear constraints over the reals extends quite naturally to constraints in certain discrete domains.
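
A textbook sketch of Fourier elimination (Fourier-Motzkin): to eliminate a variable, every inequality giving an upper bound on it is combined with every inequality giving a lower bound, using positive multipliers that cancel the variable. Practical solvers add the redundancy removal alluded to above; this minimal version does not.

```python
from fractions import Fraction

def fourier_eliminate(ineqs, k):
    """Fourier-Motzkin elimination of variable k from a system of inequalities.
    Each inequality is (coeffs, rhs) meaning  sum_j coeffs[j]*x[j] <= rhs.
    Returns an equivalent system in the remaining variables (column k becomes 0)."""
    pos, neg, rest = [], [], []
    for a, r in ineqs:
        (pos if a[k] > 0 else neg if a[k] < 0 else rest).append((a, r))
    out = list(rest)
    for ap, rp in pos:                      # upper bounds on x_k
        for an, rn in neg:                  # lower bounds on x_k
            lam, mu = -an[k], ap[k]         # both positive multipliers
            coeffs = [lam * u + mu * v for u, v in zip(ap, an)]
            out.append((coeffs, lam * rp + mu * rn))
    return out

# Example: x + y <= 4,  -x + y <= 2,  -y <= 0; eliminate x (variable 0)
F = Fraction
system = [([F(1), F(1)], F(4)), ([F(-1), F(1)], F(2)), ([F(0), F(-1)], F(0))]
projected = fourier_eliminate(system, 0)
print(projected)   # projection onto y: -y <= 0 and 2*y <= 6, i.e. 0 <= y <= 3
```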

Relevance: 40.00%

Publisher:

Abstract:

The eigenvalue assignment/pole placement procedure has found application in a wide variety of control problems. The associated literature is rather extensive, with a number of techniques discussed to that end. In this paper a method for assigning eigenvalues to a Linear Time Invariant (LTI) single-input system is proposed. The algorithm determines a matrix that has eigenvalues at the desired locations. It is obtained from knowledge of the open-loop system and the desired eigenvalues. Solution of the matrix equation, involving the unknown controller gains, the open-loop system matrices and the desired eigenvalues, results in the state feedback controller. The proposed algorithm requires the closed-loop eigenvalues to be different from those of the open-loop case. This apparent constraint is easily overcome by a negligible shift in the values. Two examples are considered to verify the proposed algorithm. The first pertains to the in-plane libration of a Tethered Satellite System (TSS), while the second is concerned with control of the short-period dynamics of a flexible airplane. Finally, the method is extended to determine the controllability Gramian, corresponding to the specified closed-loop eigenvalues, without computing the controller gains.
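
For comparison only (this is not the matrix-equation method proposed in the paper), single-input pole placement can also be carried out with the classical Ackermann formula, sketched below on a toy second-order system.

```python
import numpy as np

def ackermann(A, b, desired_poles):
    """Return state-feedback gain k such that eig(A - b k) = desired_poles."""
    n = A.shape[0]
    # Controllability matrix C = [b, Ab, ..., A^(n-1) b]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Desired characteristic polynomial evaluated at A: phi(A)
    coeffs = np.poly(desired_poles)          # [1, a1, ..., an]
    phiA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.solve(C, phiA)  # k = [0 ... 0 1] C^{-1} phi(A)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
k = ackermann(A, b, [-4.0, -5.0])
print(np.linalg.eigvals(A - b @ k))           # approximately [-4, -5]
```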

Relevance: 40.00%

Publisher:

Abstract:

A unique code (called Hensel's code) is derived for a rational number by truncating its infinite p-adic expansion. The four basic arithmetic algorithms for these codes are described and their application to rational matrix computations is demonstrated by solving a system of linear equations exactly, using the Gaussian elimination procedure.
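
A small sketch of the encoding: the Hensel code of a rational a/b (with b not divisible by p) is the residue a*b^{-1} mod p^r written as r base-p digits, and arithmetic on codes is then ordinary modular arithmetic mod p^r, which is what enables exact (error-free) rational matrix computation. The parameters p and r below are illustrative.

```python
from fractions import Fraction

P, R = 5, 4
M = P ** R

def residue(x: Fraction) -> int:
    # a * b^{-1} mod p^r, defined when gcd(b, p) = 1
    return (x.numerator * pow(x.denominator, -1, M)) % M

def digits(v: int):
    out = []
    for _ in range(R):
        out.append(v % P)
        v //= P
    return out                              # least-significant p-adic digit first

a, b = Fraction(2, 3), Fraction(1, 4)
print(digits(residue(a)), digits(residue(b)))
# Adding the codes mod p^r gives the code of the exact rational sum 11/12:
print(digits((residue(a) + residue(b)) % M) == digits(residue(a + b)))   # True
```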

Relevance: 40.00%

Publisher:

Abstract:

This work intends to demonstrate the importance of a geometrically nonlinear cross-sectional analysis of certain composite beam-based four-bar mechanisms in predicting system dynamic characteristics. All component bars of the mechanism are made of fiber-reinforced laminates and have thin rectangular cross-sections. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect. They are linked to each other by means of revolute joints. We restrict ourselves to linear materials with small strains within each elastic body (beam). Each component of the mechanism is modeled as a beam based on geometrically non-linear 3-D elasticity theory. The component problems are thus split into 2-D analyses of reference beam cross-sections and non-linear 1-D analyses along the three beam reference curves. For the thin rectangular cross-sections considered here, the 2-D cross-sectional non-linearity is also overwhelming. This can be perceived from the fact that such sections constitute a limiting case between thin-walled open and closed sections, thus inviting the non-linear phenomena observed in both. The strong elastic couplings of anisotropic composite laminates complicate the model further. However, a powerful mathematical tool called the Variational Asymptotic Method (VAM) not only enables such a dimensional reduction, but also provides asymptotically correct analytical solutions to the non-linear cross-sectional analysis. Such closed-form solutions are used here in conjunction with numerical techniques for the rest of the problem to predict multi-body dynamic responses more quickly and accurately than would otherwise be possible. The analysis methodology can be viewed as a three-step procedure: First, the cross-sectional properties of each bar of the mechanism are determined analytically based on an asymptotic procedure, starting from Classical Laminated Shell Theory (CLST) and taking advantage of the thin strip geometry. Second, the dynamic response of the non-linear, flexible four-bar mechanism is simulated by treating each bar as a 1-D beam, discretized using finite elements, and employing energy-preserving and -decaying time integration schemes for unconditional stability. Finally, local 3-D deformations and stresses in the entire system are recovered, based on the 1-D responses predicted in the previous step. With the model, tools and procedure in place, we identify and investigate a few four-bar mechanism problems where the cross-sectional non-linearities are significant in predicting critical system dynamic characteristics more accurately. This is carried out by varying stacking sequences (i.e. the arrangement of ply orientations within a laminate) and material properties, and speculating on the dominating diagonal and coupling terms in the closed-form non-linear beam stiffness matrix. A numerical example is presented which illustrates the importance of 2-D cross-sectional non-linearities, and the behavior of the system is also observed using commercial software (I-DEAS + NASTRAN + ADAMS). (C) 2012 Elsevier Ltd. All rights reserved.

Relevance: 40.00%

Publisher:

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
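
As a baseline for the decomposition-based designs just mentioned, the textbook SVD transceiver already shows how a matrix decomposition turns a point-to-point MIMO channel into parallel scalar subchannels; the GMD/GTD-based designs developed in the thesis refine this idea, and the sketch below is only the SVD baseline with illustrative dimensions (noise omitted for clarity).

```python
import numpy as np

# SVD transceiver: precode with V, equalize with U^H, so that H becomes a set
# of parallel scalar subchannels with gains equal to the singular values.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))    # flat MIMO channel
U, s, Vh = np.linalg.svd(H)

x = rng.normal(size=(4, 1)) + 1j * rng.normal(size=(4, 1))    # data symbols
tx = Vh.conj().T @ x                  # precoder: V
rx = H @ tx                           # channel (noise omitted)
y = U.conj().T @ rx                   # equalizer: U^H
print(np.allclose(y, np.diag(s) @ x))  # True: y_k = sigma_k * x_k
```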

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of these matrix decompositions and majorization theory in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed; they are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases, with known LTV channels and with unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and theoretically the number of identifiable paths is up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
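
In the spirit of the pilot-aided subspace estimator, but without the difference co-array construction and with illustrative parameter values, the sketch below applies MUSIC directly to the covariance of the channel samples on M physical pilot tones to estimate the multipath delays (essentially a conventional subspace delay estimator of the kind used for comparison).

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
M, K, snapshots = 32, 2, 400            # pilot tones, paths, OFDM symbols
df = 15e3                               # pilot spacing in Hz (assumed)
true_delays = np.array([0.8e-6, 2.9e-6])

m = np.arange(M)[:, None]
A = np.exp(-2j * np.pi * m * df * true_delays[None, :])    # M x K "steering" matrix
gains = (rng.normal(size=(K, snapshots)) + 1j * rng.normal(size=(K, snapshots))) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
H = A @ gains + noise                   # channel samples at the pilot tones

R = H @ H.conj().T / snapshots          # sample covariance across OFDM symbols
_, eigvec = np.linalg.eigh(R)
En = eigvec[:, : M - K]                 # noise subspace (smallest eigenvalues)

tau = np.linspace(0, 5e-6, 2000)
steer = np.exp(-2j * np.pi * m * df * tau[None, :])
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ steer) ** 2, axis=0)
peaks, _ = find_peaks(pseudo)
est = np.sort(tau[peaks[np.argsort(pseudo[peaks])[-K:]]])
print(est * 1e6, "microseconds")        # close to [0.8, 2.9]
```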

Relevance: 40.00%

Publisher:

Abstract:

This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for positive-semidefinite (PSD) matrices.

Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
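
A minimal sketch of one common nonuniform sparsification rule, keeping each entry with probability proportional to its squared magnitude and rescaling the survivors so the sparsifier is unbiased (E[S] = A); the sampling distributions and error norms analyzed in the thesis may differ, and the names sparsify and target_nnz are illustrative.

```python
import numpy as np

def sparsify(A, target_nnz, seed=0):
    rng = np.random.default_rng(seed)
    p = np.minimum(1.0, target_nnz * A**2 / np.sum(A**2))   # keep probabilities
    keep = rng.random(A.shape) < p
    S = np.zeros_like(A)
    S[keep] = A[keep] / p[keep]                             # unbiased rescaling
    return S

rng = np.random.default_rng(1)
x, y = rng.normal(size=200), rng.normal(size=200)
A = np.outer(x, y) + 0.1 * rng.normal(size=(200, 200))      # dominant direction + noise
S = sparsify(A, target_nnz=8000)                            # keep roughly 20% of entries
err = np.linalg.norm(A - S, 2) / np.linalg.norm(A, 2)       # relative spectral-norm error
print(np.count_nonzero(S), "nonzeros, relative spectral error %.2f" % err)
```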

Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
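
The construction can be sketched as follows: flip random signs on the columns, mix them with a unitary FFT, subsample a few mixed columns, and project the matrix onto their range. This mirrors subsampled randomized Fourier-type transforms in spirit; the exact transform and the error bounds studied in the thesis may differ, and srft_lowrank and l are illustrative names.

```python
import numpy as np

def srft_lowrank(A, l, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    signs = rng.choice([-1.0, 1.0], size=n)
    Y = np.fft.fft(A * signs, axis=1) / np.sqrt(n)     # mix columns with a unitary FFT
    cols = rng.choice(n, size=l, replace=False)        # uniform column subsampling
    Q, _ = np.linalg.qr(Y[:, cols])                    # orthonormal basis for the sample
    return Q @ (Q.conj().T @ A)                        # projection of A onto that range

rng = np.random.default_rng(2)
A = rng.normal(size=(300, 60)) @ rng.normal(size=(60, 300))   # rank-60 matrix
A_hat = srft_lowrank(A, l=80)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))   # tiny: the sample captures range(A)
```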

The last class of algorithms considered comprises sketching algorithms for symmetric positive-semidefinite (SPSD) matrices. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
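
The simplest member of this family is the column-sampling (Nystrom-type) sketch: pick l columns C of A, form the l x l core block W at the intersection of those columns with the corresponding rows, and approximate A by C W^+ C^T. The sketch below uses uniform column sampling; the thesis evaluates several sampling distributions and column mixtures.

```python
import numpy as np

def nystrom(A, l, seed=0):
    idx = np.random.default_rng(seed).choice(A.shape[0], size=l, replace=False)
    C = A[:, idx]
    W = C[idx, :]                         # the l x l core block A[idx, :][:, idx]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(3)
G = rng.normal(size=(300, 20))
A = G @ G.T                               # PSD, rank 20
A_hat = nystrom(A, l=40)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))   # small for low-rank PSD A
```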

In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
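
For reference, the classical matrix Chernoff inequality controls only the extreme eigenvalues; results of the kind described above extend such control to every eigenvalue. In its standard form (stated here as background, not as the thesis's result): for independent random positive-semidefinite d x d matrices X_k with \(\lambda_{\max}(X_k) \le R\) almost surely and \(\mu_{\min} = \lambda_{\min}\big(\sum_k \mathbb{E}\,X_k\big)\),
\[
\Pr\!\Big\{\lambda_{\min}\Big(\sum_k X_k\Big) \le (1-\delta)\,\mu_{\min}\Big\}
\;\le\; d\left[\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right]^{\mu_{\min}/R},
\qquad \delta\in[0,1).
\]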