90 results for Linear matrix inequalities (LMI) techniques


Relevance:

30.00%

Publisher:

Abstract:

In this paper, expressions for the convolution multiplication properties of the MDCT are derived starting from its equivalent DFT representations. Using these expressions, methods for implementing linear filtering through block convolution in the MDCT domain are presented. The implementation is exact for symmetric filters and approximate for non-symmetric filters in the case of rectangular-window MDCT. For a general MDCT window function, the filtering is done on the windowed segments, and hence the convolution is approximate for symmetric as well as non-symmetric filters. This approximation error is shown to be perceptually insignificant for symmetric impulse-response filters. Moreover, the inherent 50% overlap between adjacent frames used in MDCT computation reduces this approximation error, much as it smooths other block-processing errors. The presented techniques are useful for compressed-domain processing of audio signals.
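As a minimal illustration of the 50% overlap mentioned above, the following numpy sketch (function names and normalization are ours, not from the paper) implements the rectangular-window MDCT/IMDCT pair and shows that overlap-adding the inverse transforms of two adjacent frames cancels the time-domain aliasing:

```python
import numpy as np

def mdct(x, N):
    """MDCT of a 2N-sample block (rectangular window) to N coefficients."""
    n = np.arange(2 * N)[:, None]
    k = np.arange(N)[None, :]
    C = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return x @ C

def imdct(X, N):
    """Inverse MDCT: 2N time samples that contain time-domain aliasing."""
    n = np.arange(2 * N)[:, None]
    k = np.arange(N)[None, :]
    C = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return (C @ X) / N

# Two frames with hop N (50% overlap): the aliasing in each IMDCT output
# cancels when the shared middle segment is overlap-added (TDAC).
N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(3 * N)
y0 = imdct(mdct(x[0:2 * N], N), N)   # frame over samples [0, 2N)
y1 = imdct(mdct(x[N:3 * N], N), N)   # next frame, over samples [N, 3N)
mid = y0[N:] + y1[:N]                # overlap-add of the shared segment
```

With any window other than the rectangular one, the same cancellation requires the Princen-Bradley condition on the window, which is where the approximations discussed in the abstract enter.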

Relevance:

30.00%

Publisher:

Abstract:

This work intends to demonstrate the importance of geometrically nonlinear cross-sectional analysis of certain composite beam-based four-bar mechanisms in predicting system dynamic characteristics. All component bars of the mechanism are made of fiber-reinforced laminates and have thin rectangular cross-sections. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect. They are linked to each other by means of revolute joints. We restrict ourselves to linear materials with small strains within each elastic body (beam). Each component of the mechanism is modeled as a beam based on geometrically nonlinear 3-D elasticity theory. The component problems are thus split into 2-D analyses of reference beam cross-sections and nonlinear 1-D analyses along the four beam reference curves. For the thin rectangular cross-sections considered here, the 2-D cross-sectional nonlinearity is dominant. This can be perceived from the fact that such sections constitute a limiting case between thin-walled open and closed sections, thus inviting the nonlinear phenomena observed in both. The strong elastic couplings of anisotropic composite laminates complicate the model further. However, a powerful mathematical tool called the Variational Asymptotic Method (VAM) not only enables such a dimensional reduction, but also provides asymptotically correct analytical solutions to the nonlinear cross-sectional analysis. Such closed-form solutions are used here in conjunction with numerical techniques for the rest of the problem to predict multi-body dynamic responses, more quickly and accurately than would otherwise be possible. The analysis methodology can be viewed as a three-step procedure: First, the cross-sectional properties of each bar of the mechanism are determined analytically based on an asymptotic procedure, starting from Classical Laminated Shell Theory (CLST) and taking advantage of the thin-strip geometry. Second, the dynamic response of the nonlinear, flexible four-bar mechanism is simulated by treating each bar as a 1-D beam, discretized using finite elements, and employing energy-preserving and -decaying time integration schemes for unconditional stability. Finally, local 3-D deformations and stresses in the entire system are recovered, based on the 1-D responses predicted in the previous step. With the model, tools and procedure in place, we shall attempt to identify and investigate a few problems where the cross-sectional nonlinearities are significant. This will be carried out by varying stacking sequences and material properties, and identifying the dominant diagonal and coupling terms in the closed-form nonlinear beam stiffness matrix. Numerical examples will be presented, and results from this analysis will be compared with those available in the literature, with linear cross-sectional analysis and isotropic materials as special cases.

Relevance:

30.00%

Publisher:

Abstract:

A geometric and nonparametric procedure for testing whether two finite sets of points are linearly separable is proposed. The linear separability test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal coordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test completes within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr³) and its space complexity is O(nd). A short review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms, if d
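The abstract's criterion — a strictly positive point h > 0 in the range of A — can be checked with an off-the-shelf linear program. The sketch below uses SciPy rather than the paper's iterative projection scheme, so it illustrates the criterion, not the proposed algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(P, Q):
    """Test whether point sets P and Q (rows = points) are linearly separable.

    Standard construction: rows of A are [x, 1] for x in P and -[y, 1] for
    y in Q, so that strict separability is equivalent to the existence of
    z = (w, b) with A z > 0, i.e. a strictly positive point in range(A).
    """
    A = np.vstack([np.hstack([P, np.ones((len(P), 1))]),
                   -np.hstack([Q, np.ones((len(Q), 1))])])
    m, dim = A.shape
    # LP: maximize t subject to A z >= t and t <= 1 (bound keeps it finite).
    c = np.zeros(dim + 1)
    c[-1] = -1.0                            # linprog minimizes, so negate t
    G = np.hstack([-A, np.ones((m, 1))])    # -A z + t <= 0
    res = linprog(c, A_ub=G, b_ub=np.zeros(m),
                  bounds=[(None, None)] * dim + [(None, 1.0)])
    return res.status == 0 and -res.fun > 1e-9

# Two parallel segments are separable; the XOR configuration is not.
sep = linearly_separable(np.array([[0., 0.], [0., 1.]]),
                         np.array([[2., 0.], [2., 1.]]))
xor = linearly_separable(np.array([[0., 0.], [1., 1.]]),
                         np.array([[0., 1.], [1., 0.]]))
```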

Relevance:

30.00%

Publisher:

Abstract:

With the introduction of 2D flat-panel X-ray detectors, 3D image reconstruction using helical cone-beam tomography is fast replacing conventional 2D reconstruction techniques. In 3D image reconstruction, the source orbit or scanning geometry should satisfy the data sufficiency or completeness condition for exact reconstruction. The helical scan geometry satisfies this condition and hence can give exact reconstruction. The theoretically exact helical cone-beam reconstruction algorithm proposed by Katsevich is a breakthrough and has attracted interest in 3D reconstruction using helical cone-beam computed tomography. In many practical situations, the available projection data is incomplete. One such case is where the detector plane does not completely cover the full lateral extent of the object being imaged, resulting in truncated projections. This results in artifacts that mask small features near the periphery of the ROI when reconstruction uses the convolution back-projection (CBP) method under the assumption that the projection data is complete. A number of techniques exist which deal with completion of the missing data followed by CBP reconstruction. In 2D, linear prediction (LP) extrapolation has been shown to be efficient for data completion, involving minimal assumptions on the nature of the data and producing smooth extensions of the missing projection data. In this paper, we propose to extend the LP approach to extrapolating helical cone-beam truncated data. In the truncated-data situation, the projection on the multi-row flat-panel detector has missing columns toward either end in the lateral direction. The available data from each detector row is modeled using a linear predictor, the available data is extrapolated, and the completed projection data is backprojected using the Katsevich algorithm. Simulation results show the efficacy of the proposed method.
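A per-row linear predictor of the kind described can be sketched as a least-squares AR fit followed by recursive extrapolation (a generic sketch; parameter names and the fitting method are illustrative, not taken from the paper):

```python
import numpy as np

def lp_extrapolate(row, n_missing, order=4):
    """Extend a 1-D detector row by n_missing samples using a linear
    predictor x[t] = sum_j a[j] * x[t-1-j] fitted by least squares."""
    x = np.asarray(row, float)
    T = len(x)
    # Regression matrix: each row holds the `order` samples preceding x[t].
    M = np.column_stack([x[order - 1 - j: T - 1 - j] for j in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(M, y, rcond=None)
    out = list(x)
    for _ in range(n_missing):
        # Predict the next sample from the last `order` samples (reversed).
        out.append(np.dot(a, out[-1:-order - 1:-1]))
    return np.array(out)

# A noise-free sinusoid satisfies an exact AR recursion, so the
# extrapolation reproduces the "missing" samples exactly.
row = np.cos(0.3 * np.arange(32))
ext = lp_extrapolate(row, 8, order=4)
```

Real truncated projections are only approximated, of course; the point of LP completion is that the extension is smooth and assumption-light.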

Relevance:

30.00%

Publisher:

Abstract:

The dial-a-ride problem (DARP) is an optimization problem which deals with minimizing the cost of the provided service, where customers receive door-to-door service based on their requests. The optimization model presented in earlier studies is considered in this study. Due to the non-linear nature of the objective function, traditional optimization methods are plagued by the problem of converging to a local minimum. To overcome this pitfall we use metaheuristics, namely Simulated Annealing (SA), Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and Artificial Immune System (AIS). From the results obtained, we conclude that the Artificial Immune System method effectively tackles this optimization problem by providing us with optimal solutions. Crown Copyright (C) 2011 Published by Elsevier Ltd. All rights reserved.
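Of the four metaheuristics named, simulated annealing has the shortest self-contained skeleton. The sketch below shows the generic accept/reject loop on a toy route-ordering cost; it is a sketch of the technique, not the paper's DARP formulation:

```python
import math
import random

def simulated_annealing(cost, init, neighbor, T0=1.0, alpha=0.995,
                        iters=4000, seed=0):
    """Generic SA loop: accept improving moves always, worsening moves
    with Boltzmann probability exp(-delta/T), while cooling T."""
    rng = random.Random(seed)
    x, fx = init, cost(init)
    best, fbest = list(x), fx
    T = T0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        T *= alpha
    return best, fbest

# Toy stand-in for a route cost: total travel along a line of stops.
def tour_cost(perm):
    return sum(abs(perm[i + 1] - perm[i]) for i in range(len(perm) - 1))

def swap_two(perm, rng):
    p = list(perm)
    i, j = rng.randrange(len(p)), rng.randrange(len(p))
    p[i], p[j] = p[j], p[i]
    return p

start = [4, 0, 6, 2, 7, 1, 5, 3]
route, c = simulated_annealing(tour_cost, start, swap_two)
```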

Relevance:

30.00%

Publisher:

Abstract:

A new method of network analysis is presented: a generalization, in several different senses, of existing methods, applicable to all networks for which a branch-admittance (or impedance) matrix can be formed. The treatment of network determinants is very general and essentially four-terminal rather than three-terminal, and leads to simple expressions based on trees of a simple graph associated with the network and matrix, involving products of low-order, usually (2 × 2), determinants of tree-branch admittances, in addition to tree-branch products as in existing methods. By comparison with existing methods, the total number of trees and of tree pairs is usually considerably reduced, and this fact, together with an easy method of tree-pair sign determination which is also presented, makes the new method simpler in general. The method can be very easily adapted, by the use of infinite parameters, to accommodate ideal transformers, operational amplifiers and other forms of network constraint; in fact, it is thought to be applicable to all linear networks.

Relevance:

30.00%

Publisher:

Abstract:

This paper obtains a new, accurate model for sensitivity in power systems and uses it in conjunction with linear programming to solve load-shedding problems with a minimum loss of load. For cases where the error in the sensitivity model increases, other linear-programming and quadratic-programming models have been developed, taking currents at the load buses, rather than load powers, as variables. A weighted error criterion has been used to take the priority schedule into account; it can be either a linear or a quadratic function of the errors, and depending upon the function the appropriate programming technique is employed.
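As a toy illustration of the linear-programming side (not the paper's sensitivity model), minimum-weighted-loss load shedding can be posed as a small LP with SciPy: serve x_i of demand d_i under a capacity limit, with priority weights w_i penalizing shed load.

```python
import numpy as np
from scipy.optimize import linprog

d = np.array([3.0, 2.0, 1.0])   # bus demands (illustrative numbers)
w = np.array([3.0, 2.0, 1.0])   # priority weights (higher = keep served)
cap = 4.0                       # available generation

# Minimizing the weighted shed  w . (d - x)  is equivalent to
# maximizing w . x, i.e. minimizing -w . x, with 0 <= x <= d and
# sum(x) <= cap.
res = linprog(-w, A_ub=np.ones((1, 3)), b_ub=[cap],
              bounds=list(zip(np.zeros(3), d)))
served = res.x
shed_cost = w @ (d - served)
```

The optimum serves the highest-priority demand first: served = [3, 1, 0], so only the lower-priority buses are shed.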

Relevance:

30.00%

Publisher:

Abstract:

The paper proposes a study of symmetrical and related components based on the theory of linear vector spaces. Using the concept of equivalence, the transformation matrices of Clarke, Kimbark, Concordia, Boyajian and Koga are shown to be column-equivalent to Fortescue's symmetrical-component transformation matrix. With a constraint on power, criteria are presented for the choice of bases for the voltage and current vector spaces. In particular, it is shown that, for power invariance, either the same orthonormal (self-reciprocal) basis must be chosen for both the voltage and current vector spaces, or the basis of one must be chosen to be reciprocal to that of the other. The original α, β, 0 components of Clarke are modified to achieve power invariance. For machine analysis, it is shown that invariant transformations lead to reciprocal mutual inductances between the equivalent circuits. The relative merits of the various components are discussed.
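The power-invariance criterion can be checked directly: rescaling Fortescue's matrix by 1/√3 makes it unitary, i.e. the orthonormal (self-reciprocal) basis mentioned above. A short numpy sketch:

```python
import numpy as np

a = np.exp(2j * np.pi / 3)          # the 120-degree rotation operator

# Fortescue's symmetrical-component matrix (sequence -> phase).
A = np.array([[1, 1,    1],
              [1, a**2, a],
              [1, a,    a**2]])

# Power-invariant rescaling: T = A / sqrt(3) is unitary, so the same
# orthonormal basis serves both the voltage and current spaces.
T = A / np.sqrt(3)

# A balanced positive-sequence phase set maps to a pure positive-sequence
# component: V_012 = A^{-1} V_abc = [0, 1, 0].
V_abc = np.array([1, a**2, a])
V_012 = np.linalg.inv(A) @ V_abc
```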

Relevance:

30.00%

Publisher:

Abstract:

The symmetrized density matrix renormalization group method is used to study linear and nonlinear optical properties of free-base porphine and metalloporphine. A long-range interacting model, namely the Pariser-Parr-Pople model, is employed to capture the quantum many-body effects in these systems. The nonlinear optical coefficients are computed within the correction-vector method. The computed singlet and triplet low-lying excited-state energies and their charge densities are in excellent agreement with experimental as well as many other theoretical results. The rearrangement of the charge density at carbon and nitrogen sites on excitation is discussed. From our bond-order calculation, we conclude that porphine is well described by the 18-annulenic structure in the ground state and that the molecule expands upon excitation. We have modeled the regular metalloporphine by taking an effective electric field due to the metal ion and computed the excitation spectrum. Metalloporphines have D(4h) symmetry and hence have more degenerate excited states. The ground state of metalloporphines shows a 20-annulenic structure as the charge on the metal ion increases. The linear polarizability seems to increase with the charge initially and then saturates. The same trend is observed in the third-order polarizability coefficients. (C) 2012 American Institute of Physics. [doi: 10.1063/1.3671946]

Relevance:

30.00%

Publisher:

Abstract:

The radius of direct attraction of a discrete neural network is a measure of the stability of the network. It is known that Hopfield networks designed using Hebb's rule have a radius of direct attraction of Ω(n/p), where n is the size of the input patterns and p is the number of them. This lower bound is tight if p is no larger than 4. We construct a family of such networks with radius of direct attraction Ω(n/√(p log p)), for any p ≥ 5. The techniques used to prove the result led us to the first polynomial-time algorithm for designing a neural network with maximum radius of direct attraction around arbitrary input patterns. The optimal synaptic matrix is computed using the ellipsoid method of linear programming in conjunction with an efficient separation oracle. Restrictions of symmetry and non-negative diagonal entries in the synaptic matrix can be accommodated within this scheme.
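Hebb's rule referred to above is simple to state: W = (1/n) Σ_μ p_μ p_μᵀ with the diagonal zeroed. The sketch below is our own construction, using mutually orthogonal ±1 patterns (rows of a Hadamard matrix) so that recall is exact rather than merely probable, and shows that every stored pattern is a fixed point of the sign update:

```python
import numpy as np

def hebb_matrix(patterns):
    """Hebb's rule synaptic matrix for a Hopfield net: symmetric,
    zero diagonal, W = (1/n) sum_mu p_mu p_mu^T."""
    P = np.asarray(patterns, float)      # shape (p, n), entries +/-1
    p, n = P.shape
    W = P.T @ P / n
    np.fill_diagonal(W, 0.0)
    return W

# Build a 16x16 Hadamard matrix; its rows are mutually orthogonal,
# so the crosstalk between stored patterns vanishes exactly and
# W @ x = (1 - p/n) x for every stored pattern x.
H = np.array([[1.0]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])
pats = H[1:4]                            # store p = 3 patterns, n = 16
W = hebb_matrix(pats)
```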

Relevance:

30.00%

Publisher:

Abstract:

Diffuse optical tomography (DOT) is one of the ways to probe highly scattering media such as tissue, using low-energy near-infrared (NIR) light to reconstruct a map of the optical property distribution. The interaction of photons in biological tissue is a non-linear process, and photon transport through the tissue is modeled using diffusion theory. The inverse problem is often solved through iterative methods based on nonlinear optimization for the minimization of a data-model misfit function. The solution of the non-linear problem can be improved by modeling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, and after minimization the cost functional reduces to Ax = b. The spatial distribution of the optical parameter can be obtained by solving the above equation iteratively for x. As the problem is non-linear, ill-posed and ill-conditioned, there will be an error or correction term for x at each iteration. A linearization strategy is proposed for the solution of the nonlinear ill-posed inverse problem by a linear combination of the system matrix and the error in the solution. By propagating the error (e) information (obtained from the previous iteration) to the minimization function f(x), we can rewrite the minimization function as f(x; e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x; e) = f(x) + e^T A e. The self-guided spatially weighted prior (e^T A e), where e is the error in estimating x, provides information along the principal nodes that facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes the side lobes, thereby improving the contrast, localization and resolution of the reconstructed image, which has not been possible with conventional linear and regularization algorithms.
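The algebra behind the revised functional can be verified numerically. With A symmetric, expanding f(x + e) leaves cross terms e^T(2Ax - b), which vanish at the stationary point of f as written (2Ax = b; the factor of two is absorbed into the normalization in the abstract's statement). A small numpy check with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)          # symmetric positive-definite system
b = rng.standard_normal(5)
c = 1.0

f = lambda v: v @ A @ v - b @ v + c   # the cost functional as stated

x = np.linalg.solve(2 * A, b)         # stationary point: gradient 2Ax - b = 0
e = 0.1 * rng.standard_normal(5)      # an arbitrary error/correction term

# At the minimizer the cross terms e^T(2Ax - b) vanish, so the revised
# functional is exactly f(x; e) = f(x) + e^T A e.
lhs = f(x + e)
rhs = f(x) + e @ A @ e
```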

Relevance:

30.00%

Publisher:

Abstract:

Traditional image-reconstruction methods in rapid dynamic diffuse optical tomography employ l2-norm-based regularization, which is known to remove the high-frequency components of the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, where nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependency of the solution between successive frames results in a linear inverse problem. This new framework, in combination with l1-norm-based regularization, can provide better robustness to noise and better contrast recovery compared to conventional l2-based techniques. Moreover, it is shown that the proposed l1-based technique is computationally efficient compared to its l2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame, and any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
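A generic l1-regularized solver of the kind contrasted with l2 above is iterative soft-thresholding (ISTA). The sketch below is a stand-in illustration on synthetic data, not the paper's dynamic-DOT algorithm:

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant -> step 1/L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L        # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# A sparse ground truth is recovered (up to a small shrinkage bias),
# illustrating why l1 regularization preserves localized contrast.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -2.0
x_hat = ista(A, A @ x_true, lam=0.01)
```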

Relevance:

30.00%

Publisher:

Abstract:

Wave propagation in a graphene sheet embedded in an elastic medium (polymer matrix) has been a topic of great interest in the nanomechanics of graphene sheets, where equivalent continuum models are widely used. In this manuscript, we examine this issue by incorporating nonlocal theory into the classical plate model. The influence of the nonlocal scale effects has been investigated in detail. The results are qualitatively different from those obtained based on the local/classical plate theory and thus are important for the development of monolayer-graphene-based nanodevices. In the present work, the graphene sheet is modeled as a one-atom-thick isotropic plate. Chemical bonds are assumed to be formed between the graphene sheet and the elastic medium. The polymer matrix is described by a Pasternak foundation model, which accounts for both the normal pressure and the transverse shear deformation of the surrounding elastic medium. When the shear effects are neglected, the model reduces to the Winkler foundation model. The normal pressure or Winkler elastic-foundation parameter is approximated as a series of closely spaced, mutually independent, vertical linear elastic springs, where the foundation modulus is assumed equivalent to the stiffness of the springs. For this model, the nonlocal governing differential equations of motion are derived from the minimization of the total potential energy of the entire system. An ultrasonic type of flexural wave propagation model is also derived, and the results of the wave dispersion analysis are shown for both local and nonlocal elasticity calculations. From this analysis we show that the elastic matrix strongly affects the flexural wave mode and rapidly increases the frequency band gap of the flexural mode. The flexural wavenumbers obtained from nonlocal elasticity calculations are higher than those from local elasticity calculations. The corresponding wave group speeds are smaller in the nonlocal calculation than in the local elasticity calculation. The effect of the y-directional wavenumber (η_q) on the spectrum and dispersion relations of graphene embedded in a polymer matrix is also observed. We also show that the cut-off frequencies of the flexural wave mode depend not only on the y-direction wavenumber but also on the nonlocal scaling parameter (e_0 a). The effect of η_q and e_0 a on the cut-off frequency variation is also captured for the cases with and without the elastic-matrix effect. For a given nanostructure, the nonlocal small-scale coefficient can be obtained by matching the results from molecular dynamics (MD) simulations with the nonlocal elasticity calculations. At that value of the nonlocal scale coefficient, the waves will propagate in the nanostructure at that cut-off frequency. In the present paper, different values of e_0 a are used. One can get the exact e_0 a for a given graphene sheet by matching the MD simulation results of graphene with the results presented in this article. (c) 2012 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Structural Support Vector Machines (SSVMs) have become a popular tool in machine learning for predicting structured objects like parse trees, part-of-speech (POS) label sequences and image segments. Various efficient algorithmic techniques have been proposed for training SSVMs on large datasets. The typical SSVM formulation contains a regularizer term and a composite loss term. The loss term is usually composed of the Linear Maximum Error (LME) associated with the training examples. Other alternatives for the loss term are yet to be explored for SSVMs. We formulate a new SSVM with a Linear Summed Error (LSE) loss term and propose efficient algorithms to train the new SSVM formulation using a primal cutting-plane method and a sequential dual coordinate-descent method. Numerical experiments on benchmark datasets demonstrate that the sequential dual coordinate-descent method is faster than the cutting-plane method and reaches steady-state generalization performance sooner. It is thus a useful alternative for training SSVMs when the linear summed error is used.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1, ..., Xm and a receiver interested in computing an s-dimensional subspace generated by [X1, ..., Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of a set {X_i}_{i=1}^m of m random variables whose joint probability distribution function is given.
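The Korner-Marton idea of a common linear encoder rests on linearity over GF(2): the receiver can combine the two encodings into the encoding of the modulo-two sum without ever recovering the sources themselves. A minimal numpy illustration (the encoder matrix here is random, not a designed code):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.integers(0, 2, size=(4, 8))   # common linear encoder over GF(2)
x = rng.integers(0, 2, size=8)        # source 1
y = rng.integers(0, 2, size=8)        # source 2 (correlated, in practice)

sx = A @ x % 2                        # what encoder 1 transmits
sy = A @ y % 2                        # what encoder 2 transmits
z = (x + y) % 2                       # the mod-2 sum the receiver wants

# Linearity: the sum of the encodings equals the encoding of the sum,
# so a decoder for A applied to sx XOR sy recovers z directly.
combined = (sx + sy) % 2
```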