272 results for problem complexity
Abstract:
In this paper, we deal with low-complexity near-optimal detection/equalization in large-dimension multiple-input multiple-output inter-symbol interference (MIMO-ISI) channels using message passing on graphical models. A key contribution of the paper is the demonstration that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexity through simple yet effective simplifications/approximations, even though the graphical models that represent MIMO-ISI channels are fully/densely connected (loopy graphs). These include 1) the use of a Markov random field (MRF)-based graphical model with pairwise interaction, in conjunction with message damping, and 2) the use of a factor graph (FG)-based graphical model with Gaussian approximation of interference (GAI). The per-symbol complexities are O(K^2 n_t^2) and O(K n_t) for the MRF and the FG-with-GAI approaches, respectively, where K and n_t denote the number of channel uses per frame and the number of transmit antennas, respectively. These low complexities are quite attractive for large dimensions, i.e., for large K n_t. From a performance perspective, these algorithms are even more interesting in large dimensions, since they get increasingly close to optimal detection performance as K n_t grows. We also show that these message passing algorithms can be used iteratively with local neighborhood search algorithms to improve the reliability/performance of M-QAM symbol detection.
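To make the FG-with-GAI idea concrete, here is a minimal, hypothetical Python/NumPy sketch for a real-valued BPSK system y = Hx + n. It uses posterior beliefs rather than per-edge extrinsic messages and omits the MIMO-ISI frame structure, damping, and M-QAM handling described in the abstract; it only illustrates how the interference seen by each symbol is treated as a Gaussian with matched mean and variance.

```python
import numpy as np

def fg_gai_detect(y, H, noise_var, n_iter=10):
    """Illustrative factor-graph detector with Gaussian approximation of
    interference (GAI) for a real BPSK system y = H x + n.
    Uses posterior beliefs instead of per-edge extrinsic messages."""
    n_obs, n_sym = H.shape
    p1 = np.full(n_sym, 0.5)              # current belief P(x_j = +1)
    for _ in range(n_iter):
        mean_x = 2.0 * p1 - 1.0           # E[x_j] for BPSK
        var_x = 1.0 - mean_x ** 2         # Var[x_j]
        llr = np.zeros(n_sym)
        for j in range(n_sym):
            for i in range(n_obs):
                # Interference from all symbols other than x_j in observation i,
                # approximated as a Gaussian with matched mean and variance.
                mu = H[i] @ mean_x - H[i, j] * mean_x[j]
                var = (H[i] ** 2) @ var_x - H[i, j] ** 2 * var_x[j] + noise_var
                llr[j] += 2.0 * H[i, j] * (y[i] - mu) / var
        llr = np.clip(llr, -30.0, 30.0)   # avoid overflow in the logistic
        p1 = 1.0 / (1.0 + np.exp(-llr))
    return np.where(p1 >= 0.5, 1.0, -1.0)

# Tiny usage example: 8 receive dimensions, 4 BPSK symbols.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))
x = rng.choice([-1.0, 1.0], size=4)
y = H @ x + 0.1 * rng.standard_normal(8)
print("detected:", fg_gai_detect(y, H, noise_var=0.01), " sent:", x)
```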
Abstract:
The pursuit-evasion problem of two aircraft in a horizontal plane is modelled as a zero-sum differential game with capture time as the payoff. The aircraft are modelled as point masses with thrust and bank angle controls. The games of kind and degree for this differential game are solved.
Abstract:
The diversity order and coding gain are crucial to the performance of a multiple antenna communication system. It is known that space-time trellis codes (STTCs) can be used to achieve these objectives; in particular, STTCs can provide large coding gains. Many attempts have been made to construct STTCs that achieve full diversity and good coding gains, though a general method of construction does not exist. The rate-1 delay diversity code is known to achieve full diversity for any number of transmit antennas and any signal set, but it does not give a good coding gain. A product-distance-code-based delay diversity scheme (Tarokh, V., et al., IEEE Trans. Inform. Theory, vol. 44, pp. 744-765, 1998) would allow one to improve the coding gain and construct STTCs for any given number of states by using coding in conjunction with delay diversity; finding such a construction was stated there as an open problem. We provide such a construction. We assume a shift-register-based model to construct an STTC for any state complexity. We derive a sufficient condition, based on the delay diversity scheme, for this STTC to achieve full diversity. This condition provides a framework for doing coding in conjunction with delay diversity for any signal constellation. Using this condition, we give a formal rate-1 STTC construction scheme for PSK signal sets, for any number of transmit antennas and any given number of states, which achieves full diversity and gives a good coding gain.
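For reference, plain delay diversity, the baseline scheme the construction above builds on, is simple enough to sketch: antenna i transmits the same symbol stream delayed by i symbol periods. The Python snippet below is a hypothetical illustration of that baseline only, not of the paper's coded construction.

```python
import numpy as np

def delay_diversity_encode(symbols, n_tx):
    """Plain delay diversity: antenna i transmits the symbol stream delayed
    by i symbol periods (zero-padded). Rows index transmit antennas,
    columns index channel uses."""
    symbols = np.asarray(symbols, dtype=complex)
    T = len(symbols) + n_tx - 1
    X = np.zeros((n_tx, T), dtype=complex)
    for i in range(n_tx):
        X[i, i:i + len(symbols)] = symbols
    return X

# Example: 4 QPSK symbols over 3 transmit antennas.
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
print(delay_diversity_encode(qpsk[[0, 2, 1, 3]], n_tx=3))
```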
Abstract:
In this paper, we give a new framework for constructing low ML decoding complexity space-time block codes (STBCs) using codes over the Klein group K. Almost all known low ML decoding complexity STBCs can be obtained via this approach. New full-diversity STBCs with low ML decoding complexity and the cubic shaping property are constructed, via codes over K, for numbers of transmit antennas N = 2^m, m >= 1, and rates R > 1 complex symbols per channel use. When R = N, the new STBCs are information-lossless as well. The new class of STBCs has the least known ML decoding complexity among all codes available in the literature for a large set of (N, R) pairs.
Abstract:
An analytical analysis of ferroresonance, with possible cases of its occurrence in series- and shunt-compensated systems, is presented. A term 'percentage unstable zone' is defined to compare the jump severity of different nonlinearities. A direct analytical method has been shown to yield complete information. An attempt has been made to find all four critical points: the jump-from and jump-to points of the ferroresonance jump phenomenon. The systems considered for analysis are typical 500 kV transmission systems of various lengths.
Abstract:
This report describes some preliminary experiments on the use of the relaxation technique for the reconstruction of the elements of a matrix given their various directional sums (or projections).
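As a concrete, hypothetical illustration of a relaxation scheme of this kind, the NumPy sketch below reconstructs a matrix from just its row and column sums by repeatedly distributing each projection residual evenly along the corresponding row or column; the report's experiments consider more general directional sums.

```python
import numpy as np

def relax_reconstruct(row_sums, col_sums, n_iter=100, lam=1.0):
    """Additive relaxation: start from zeros and repeatedly correct every row
    and column so that its sum moves toward the measured projection.
    Assumes consistent data, i.e. sum(row_sums) == sum(col_sums)."""
    m, n = len(row_sums), len(col_sums)
    A = np.zeros((m, n))
    for _ in range(n_iter):
        A += lam * (row_sums - A.sum(axis=1))[:, None] / n   # row corrections
        A += lam * (col_sums - A.sum(axis=0))[None, :] / m   # column corrections
    return A

true = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
est = relax_reconstruct(true.sum(axis=1), true.sum(axis=0))
print(est)
print("row sums:", est.sum(axis=1), " column sums:", est.sum(axis=0))
```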
Abstract:
One of the significant advancements in nuclear magnetic resonance (NMR) spectroscopy for combating the problem of spectral complexity in deriving structural and conformational information is the incorporation of an additional dimension to spread the information content over a two-dimensional space. This approach, together with manipulation of the dynamics of nuclear spins, has permitted the design of appropriate pulse sequences, leading to the evolution of diverse multidimensional NMR experiments. The desired spectral information can now be extracted in a simplified and orchestrated manner. The indirect detection of multiple quantum (MQ) NMR frequencies is a step in this direction. The MQ technique has been extensively used in the study of molecules aligned in liquid crystalline media to reduce spectral complexity and to determine molecular geometries. Unlike in dipolar coupled systems, the network of scalar coupled spins in isotropic solutions is not large, and MQ 1H detection is not routinely employed, although there are specific examples of spin topology filtering. In this brief review, we discuss our recent studies on the development and application of multiple quantum correlation and resolved techniques for the analysis of proton NMR spectra of scalar coupled spins.
Abstract:
Considering the linearized boundary layer equations for three-dimensional disturbances, a Mangler type transformation is used to reduce this case to an equivalent two-dimensional one.
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much they cost, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their input real quantities, are exact, the computations that we do using a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
Abstract:
This article is concerned with subsurface material identification for the 2-D Helmholtz equation. The proposed algorithm is iterative: it starts from an initial guess for the unknown function and computes corrections to it by linearizing the otherwise nonlinear problem around the background field, i.e., the field variable generated from the current guess at each iteration. Numerical results indicate that the algorithm can recover a close estimate of the unknown function from measurements collected at the boundary.
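A hypothetical outline of this iterate-and-correct structure in Python is sketched below; forward_solve and linearized_update are assumed, user-supplied callables standing in for the forward Helmholtz solve and the linearized inversion step, neither of which is specified in the abstract.

```python
import numpy as np

def identify_coefficient(measured, forward_solve, linearized_update,
                         q0, n_iter=20, tol=1e-8):
    """Outline of the iterative scheme: linearize about the background field
    generated by the current guess, solve for a correction, repeat.

    Assumed (user-supplied) callables, not specified in the abstract:
      forward_solve(q)                    -> (background_field, predicted_boundary_data)
      linearized_update(q, field, resid)  -> correction dq from the linearized problem
    """
    q = np.array(q0, dtype=float)
    for _ in range(n_iter):
        field, predicted = forward_solve(q)        # background field for current guess
        residual = measured - predicted            # misfit against boundary measurements
        if np.linalg.norm(residual) < tol:
            break
        q = q + linearized_update(q, field, residual)   # correction from linearized problem
    return q
```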
Abstract:
In this paper, we investigate a numerical method for an inverse problem: recovering missing data on one part of the boundary of a domain from Cauchy data on another part, for a variable-coefficient elliptic Cauchy problem. In the process, the Cauchy problem is transformed into the problem of solving a compact linear operator equation. As a remedy for the ill-posedness of the problem, we use a projection method that allows regularization solely by discretization; the discretization level plays the role of the regularization parameter. The balancing principle is used to choose an appropriate discretization level. Several numerical examples show that the method produces a stable, accurate approximate solution.
Abstract:
The nonlocal term in nonlinear equations of Kirchhoff type causes difficulties when the equation is solved numerically using the Newton-Raphson method, because the Jacobian in the Newton-Raphson iteration is dense (full). In this article, the finite element system is replaced by an equivalent system for which the Jacobian is sparse. We derive quasi-optimal error estimates for the finite element method and demonstrate the results with numerical experiments.
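One common way to obtain such an equivalent system, sketched below for a toy 1-D Kirchhoff-type problem, is to treat the nonlocal scalar as an extra unknown, which turns the dense Newton Jacobian into a sparse matrix with a single border (an arrowhead structure). This is a hypothetical illustration of the general idea and need not coincide with the article's reformulation.

```python
import numpy as np

# Toy 1-D Kirchhoff-type problem  -M(||u'||^2) u'' = f  on (0,1), u(0)=u(1)=0,
# with M(s) = 1 + s, discretized by linear finite elements on a uniform mesh.
# Eliminating the nonlocal scalar d = U^T A U gives a dense Newton Jacobian;
# keeping d as an extra unknown gives an arrowhead (sparse + one border) Jacobian.
n = 50                                            # interior nodes
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h           # tridiagonal stiffness matrix
F = h * np.ones(n)                                # load vector for f = 1
M = lambda s: 1.0 + s                             # M'(s) = 1

U = np.zeros(n)
d = 0.0
for _ in range(20):                               # Newton iteration on (U, d)
    R1 = M(d) * (A @ U) - F                       # residual of the PDE block
    R2 = d - U @ (A @ U)                          # residual of the scalar constraint
    J = np.zeros((n + 1, n + 1))                  # arrowhead Jacobian (dense storage here,
    J[:n, :n] = M(d) * A                          #  sparse in a real implementation)
    J[:n, n] = A @ U                              # dM/dd * A U, with M'(d) = 1
    J[n, :n] = -2.0 * (A @ U)
    J[n, n] = 1.0
    delta = np.linalg.solve(J, -np.concatenate([R1, [R2]]))
    U += delta[:n]
    d += delta[n]

print("||u'||^2 approx", d, " max u approx", U.max())
```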
Abstract:
The repeated or closely spaced eigenvalues, and the corresponding eigenvectors, of a matrix are usually very sensitive to a perturbation of the matrix, which makes capturing the behavior of these eigenpairs very difficult. A similar difficulty is encountered in solving the random eigenvalue problem when a matrix with random elements has a set of clustered eigenvalues in its mean. In addition, methods for solving the random eigenvalue problem often differ in how they characterize the problem, which leads to different interpretations of the solution; the solutions obtained from different methods thus become mathematically incomparable. These two issues, the difficulty of solving and the non-unique characterization, are addressed here. A different approach is used: instead of tracking a few individual eigenpairs, the corresponding invariant subspace is tracked. The spectral stochastic finite element method is used for the analysis, with polynomial chaos expansions representing the random eigenvalues and eigenvectors; however, the main concept of tracking the invariant subspace remains largely independent of any such representation. The approach is successfully implemented for response prediction of a system with repeated natural frequencies. It is found that tracking only an invariant subspace can be sufficient to build a modal-based reduced-order model of the system.
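The sensitivity contrast that motivates this approach is easy to reproduce numerically. The hypothetical NumPy snippet below perturbs a matrix with a repeated eigenvalue and compares the change in one eigenvector of the cluster with the change in the invariant subspace the cluster spans; it is a toy deterministic illustration, not the paper's spectral stochastic finite element procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric 4x4 matrix with a repeated eigenvalue (1.0, multiplicity 2).
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.0, 1.0, 3.0, 5.0]) @ Q.T

# Small symmetric perturbation.
E = 1e-6 * rng.standard_normal((4, 4))
E = (E + E.T) / 2

w0, V0 = np.linalg.eigh(A)        # eigenvalues in ascending order
w1, V1 = np.linalg.eigh(A + E)    # the cluster occupies the first two columns

# Change in a single eigenvector of the cluster: can be O(1) despite the
# tiny perturbation, because the basis within the eigenspace is arbitrary.
v0, v1 = V0[:, 0], V1[:, 0]
v0 = np.sign(v0 @ v1) * v0        # remove the sign ambiguity
print("eigenvector change :", np.linalg.norm(v1 - v0))

# Largest principal angle between the two 2-D invariant subspaces: tiny,
# i.e., the subspace is stable even though its individual eigenvectors are not.
s = np.linalg.svd(V0[:, :2].T @ V1[:, :2], compute_uv=False)
print("max principal angle:", np.arccos(np.clip(s.min(), -1.0, 1.0)))
```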