958 results for Extremal polynomial ultraspherical polynomials
Abstract:
The growth rates of the hydrodynamic modes in the homogeneous sheared state of a granular material are determined by solving the Boltzmann equation. The steady velocity distribution is considered to be the product of the Maxwell-Boltzmann distribution and a Hermite polynomial expansion in the velocity components; this form is inserted into the Boltzmann equation and solved to obtain the coefficients of the terms in the expansion. The solution is obtained using an expansion in the parameter epsilon = (1 - e)^(1/2), and terms correct to epsilon^4 are retained to obtain an approximate solution; the error due to the neglect of higher terms is estimated at about 5% for e = 0.7. A small perturbation is placed on the distribution function in the form of a Hermite polynomial expansion for the velocity variations and a Fourier expansion in the spatial coordinates; this is inserted into the Boltzmann equation and the growth rate of the Fourier modes is determined. It is found that in the hydrodynamic limit, the growth rates of the hydrodynamic modes in the flow direction have unusual characteristics. The growth rate of the momentum diffusion mode is positive, indicating that density variations are unstable in the limit k --> 0, and the growth rate increases proportional to |k|^(2/3) in the limit k --> 0 (in contrast to the k^2 increase in elastic systems), where k is the wave vector in the flow direction. The real and imaginary parts of the growth rate corresponding to the propagating modes also increase proportional to |k|^(2/3) (in contrast to the k^2 and k increase in elastic systems). The energy mode is damped due to inelastic collisions between particles. The scaling of the growth rates of the hydrodynamic modes with the wave vector l in the gradient direction is similar to that in elastic systems. (C) 2000 Elsevier Science B.V. All rights reserved.
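Written out, the assumed distribution and the reported small-k scalings are roughly as follows (a notational sketch inferred from the abstract; the coefficients a_i and the prefactors are not specified here):

\[
f(\mathbf{c}) = f_{\mathrm{MB}}(\mathbf{c}) \sum_{i} a_i H_i(\mathbf{c}),
\qquad \epsilon = (1 - e)^{1/2},
\]
\[
s_{\mathrm{mom}}(k) \sim |k|^{2/3} \quad (k \to 0,\ \text{flow direction}),
\qquad \text{compared with } s \sim k^2 \text{ in elastic systems.}
\]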
Abstract:
Tanner graph representation of linear block codes is widely used by iterative decoding algorithms for recovering data transmitted across a noisy communication channel from errors and erasures introduced by the channel. The stopping distance of a Tanner graph T for a binary linear block code C determines the number of erasures correctable using iterative decoding on the Tanner graph T when data is transmitted across a binary erasure channel using the code C. We show that the problem of finding the stopping distance of a Tanner graph is hard to approximate within any positive constant approximation ratio in polynomial time unless P = NP. It is also shown as a consequence that there can be no approximation algorithm for the problem achieving an approximation ratio of 2^((log n)^(1-epsilon)) for any epsilon > 0 unless NP ⊆ DTIME(n^(poly(log n))).
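For concreteness, a brute-force sketch of the quantity in question is given below; it enumerates subsets of variable nodes and is exponential in the block length, consistent with the hardness result above (the parity-check matrix and function name are illustrative, not from the paper):

# Brute-force sketch of the stopping distance of a Tanner graph: the size of
# the smallest non-empty set S of variable nodes such that no check node has
# exactly one neighbour in S. Illustration only; exponential in n.
from itertools import combinations

def stopping_distance(H):
    """H: binary parity-check matrix given as a list of rows (lists of 0/1)."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(sum(row[j] for j in S) != 1 for row in H):
                return size  # S is a minimum-size stopping set
    return None

# Parity-check matrix of the (7, 4) Hamming code.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(stopping_distance(H))  # 3 for this Tanner graph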
Abstract:
The physical design of a VLSI circuit involves circuit partitioning as a subtask. Typically, it is necessary to partition a large electrical circuit into several smaller circuits such that the total cross-wiring is minimized. This problem is a variant of the more general graph partitioning problem, and no polynomial-time algorithm is known for obtaining an optimal partition. The heuristic procedure proposed by Kernighan and Lin [1,2] requires O(n^2 log_2 n) time to obtain a near-optimal two-way partition of a circuit with n modules. In the VLSI context, due to the large problem size involved, this computational requirement is unacceptably high. This paper is concerned with the hardware acceleration of the Kernighan-Lin procedure on an SIMD architecture. The proposed parallel partitioning algorithm requires O(n) processors and has a time complexity of O(n log_2 n). In the proposed scheme, the reduced array architecture is employed with due consideration towards cost-effectiveness and VLSI realizability of the architecture. The authors are not aware of any earlier attempts to parallelize a circuit partitioning algorithm in general or the Kernighan-Lin algorithm in particular. The use of the reduced array architecture is novel and opens up the possibility of using this computing structure for several other applications in electronic design automation.
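For background, a minimal serial sketch of one Kernighan-Lin improvement pass on an unweighted graph is given below; it shows the gain-based pair swapping that the paper accelerates, and is not the paper's parallel SIMD formulation (function and variable names are ours):

# One Kernighan-Lin pass: tentatively swap vertex pairs in order of gain,
# then keep only the prefix of swaps with the largest cumulative gain.
def kl_pass(adj, part_a, part_b):
    a, b = set(part_a), set(part_b)

    def d(v, own, other):
        # External cost minus internal cost of vertex v.
        return sum(1 for u in adj[v] if u in other) - sum(1 for u in adj[v] if u in own)

    locked, swaps, gains = set(), [], []
    for _ in range(min(len(a), len(b))):
        best = None
        for x in a - locked:
            for y in b - locked:
                gain = d(x, a, b) + d(y, b, a) - 2 * (1 if y in adj[x] else 0)
                if best is None or gain > best[0]:
                    best = (gain, x, y)
        gain, x, y = best
        swaps.append((x, y)); gains.append(gain)
        # Tentatively perform the swap so later gains see the new partition.
        a.remove(x); b.add(x)
        b.remove(y); a.add(y)
        locked.update((x, y))
    # Keep only the prefix of swaps with the largest cumulative gain.
    running, best_total, best_k = 0, 0, 0
    for k, g in enumerate(gains, 1):
        running += g
        if running > best_total:
            best_total, best_k = running, k
    a, b = set(part_a), set(part_b)
    for x, y in swaps[:best_k]:
        a.remove(x); b.add(x)
        b.remove(y); a.add(y)
    return a, b, best_total

# Example: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3,
# starting from a poor partition.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(kl_pass(adj, {0, 1, 3}, {2, 4, 5}))  # ({0, 1, 2}, {3, 4, 5}, 4): cut drops from 5 to 1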
Abstract:
The max-coloring problem is to compute a legal coloring of the vertices of a graph G = (V, E) with a non-negative weight function w on V such that sum_{i=1}^{k} max_{v in C_i} w(v) is minimized, where C_1, ..., C_k are the various color classes. Max-coloring general graphs is as hard as the classical vertex coloring problem, a special case in which vertices have unit weight. In fact, in some cases it can even be harder: for example, no polynomial time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring a broad class of trees, and show it can be solved in time O(|V| + time for sorting the vertex weights). When vertex weights belong to R, we show a matching lower bound of Omega(|V| log |V|) in the algebraic computation tree model.
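A small sketch of the objective being minimized is given below; it only evaluates the cost of a given coloring and is not the paper's algorithm for paths or trees (names and the example data are ours):

# Max-coloring objective: sum over color classes of the heaviest vertex in each class.
from collections import defaultdict

def max_coloring_cost(coloring, weight):
    """coloring: dict vertex -> color; weight: dict vertex -> non-negative weight."""
    heaviest = defaultdict(float)
    for v, c in coloring.items():
        heaviest[c] = max(heaviest[c], weight[v])
    return sum(heaviest.values())

# Example: a 4-vertex path colored with two colors.
weights = {1: 5.0, 2: 1.0, 3: 4.0, 4: 2.0}
colors = {1: 'red', 2: 'blue', 3: 'red', 4: 'blue'}
print(max_coloring_cost(colors, weights))  # 5.0 + 2.0 = 7.0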
Abstract:
The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings is contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving the fault detection and isolation is to remove noise and outliers from time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults and non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural network and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. Test signals consider low-order polynomial growth of damage indicators with time to simulate gradual or incipient faults, and step changes in the signal to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters result in a noise reduction of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier removal capabilities. These results show the potential of soft computing methods for specific signal-processing applications.
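One way to read the WRM idea is sketched below; the window length, the integer weights, and the boundary handling are illustrative assumptions made here, not the genetic-algorithm-optimized design of the paper:

# Weighted recursive median sketch: the left half of the window uses already
# filtered outputs (the "recursive" part), each sample is replicated by its
# integer weight, and the median of the replicated list is taken.
from statistics import median

def wrm_filter(x, weights):
    """weights: odd-length list of positive integers; the centre aligns with x[n]."""
    half = len(weights) // 2
    y = list(x)  # filtered output, initialized with the input
    for n in range(len(x)):
        window = []
        for j, w in enumerate(weights):
            k = n + j - half
            if k < 0 or k >= len(x):
                continue  # simple boundary handling: drop out-of-range samples
            sample = y[k] if k < n else x[k]  # past samples come from the output
            window.extend([sample] * w)       # integer-weight replication
        y[n] = median(window)
    return y

# Example: a step signal with a single outlier; the outlier is removed while
# the abrupt step is preserved.
signal = [0, 0, 0, 9, 0, 0, 5, 5, 5, 5]
print(wrm_filter(signal, [1, 2, 3, 2, 1]))  # [0, 0, 0, 0, 0, 0, 5, 5, 5, 5]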
Abstract:
Modern wireline and wireless communication devices are multimode and multifunctional communication devices. In order to support multiple standards on a single platform, it is necessary to develop a reconfigurable architecture that can provide the required flexibility and performance. The channel decoder is one of the most compute-intensive and essential elements of any communication system. Most of the standards require a reconfigurable channel decoder that is capable of performing Viterbi decoding and Turbo decoding. Furthermore, the channel decoder needs to support different configurations of Viterbi and Turbo decoders. In this paper, we propose a reconfigurable channel decoder that can be reconfigured for standards such as WCDMA, CDMA2000, IEEE 802.11, DAB, DVB and GSM. Different parameters such as code rate, constraint length, generator polynomials and truncation length can be configured to map any of the above-mentioned standards. A multiprocessor approach has been followed to provide higher throughput and scalable power consumption in various configurations of the reconfigurable Viterbi decoder and Turbo decoder. We also propose a hybrid register exchange approach for the multiprocessor architecture to minimize power consumption.
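A sketch of what such a configuration record might hold is given below; the field names are assumptions made for illustration, not the paper's interface, and the example values are the widely used K = 7, rate-1/2 convolutional code with octal generators 133 and 171 (as in IEEE 802.11a) with the common 5K traceback-depth rule of thumb:

# Configuration record holding the parameters the abstract lists as configurable.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ChannelDecoderConfig:
    standard: str                      # e.g. "WCDMA", "CDMA2000", "IEEE 802.11", ...
    mode: str                          # "viterbi" or "turbo"
    code_rate: Tuple[int, int]         # (k, n), e.g. (1, 2) for rate 1/2
    constraint_length: int             # K
    generator_polys: Tuple[int, ...]   # generator polynomials, in octal
    truncation_length: int             # traceback / survivor depth

cfg = ChannelDecoderConfig(
    standard="IEEE 802.11",
    mode="viterbi",
    code_rate=(1, 2),
    constraint_length=7,
    generator_polys=(0o133, 0o171),
    truncation_length=35,
)
print(cfg)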
Abstract:
High-speed evaluation of a large number of linear, quadratic, and cubic expressions is very important for the modeling and real-time display of objects in computer graphics. Using VLSI techniques, chips called pixel planes have actually been built by H. Fuchs and his group to evaluate linear expressions. In this paper, we describe a topological variant of Fuchs' pixel planes which can evaluate linear, quadratic, cubic, and higher-order polynomials. In our design, we make use of local interconnections only, i.e., interconnections between neighboring processing cells. This leads to the concept of tiling the processing cells for VLSI implementation.
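The incremental idea behind such evaluators can be sketched in software for the linear case F(x, y) = Ax + By + C: each cell obtains its value from a neighbouring cell by a single addition, mirroring the local-interconnection scheme. This is an illustrative model, not the paper's VLSI design; higher-order polynomials need correspondingly higher-order difference terms.

# Evaluate F(x, y) = A*x + B*y + C over a grid using only nearest-neighbour additions.
def evaluate_linear(A, B, C, width, height):
    grid = [[0.0] * width for _ in range(height)]
    grid[0][0] = C                       # value at the corner cell (x = 0, y = 0)
    for x in range(1, width):
        grid[0][x] = grid[0][x - 1] + A  # pass the value right, adding A per step
    for y in range(1, height):
        for x in range(width):
            grid[y][x] = grid[y - 1][x] + B  # pass the value down, adding B per step
    return grid

# Example: F(x, y) = 2x + 3y + 1 on a 4 x 3 grid.
for row in evaluate_linear(2.0, 3.0, 1.0, 4, 3):
    print(row)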
Abstract:
Bose-Chaudhuri-Hocquenghem (BCH) codes with symbols from an arbitrary finite integer ring are derived in terms of their generator polynomials. The derivation is based on the factorization of x^n - 1 over the unit ring of an appropriate extension of the finite integer ring. The construction is thus shown to be similar to that for BCH codes over finite fields.
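For comparison, the familiar finite-field construction that the ring construction parallels can be sketched as follows; this is the GF(2) case only, not the paper's integer-ring derivation, and n = 7 is chosen purely for illustration:

# Factor x^n - 1 over GF(2); a BCH generator polynomial is a product of some
# of the irreducible factors. For n = 7 the factors are x + 1, x^3 + x + 1 and
# x^3 + x^2 + 1 (in some order), and x^3 + x + 1 generates the single-error-
# correcting (7, 4) BCH (Hamming) code.
from sympy import symbols, factor_list

x = symbols('x')
n = 7
coeff, factors = factor_list(x**n - 1, x, modulus=2)
print(factors)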
Abstract:
This report contains the details of the development of the stiffness matrix for a rectangular laminated anisotropic shallow thin shell finite element. The derivation is carried out under linear thin shell assumptions. By expressing the assumed displacement state over the middle surface of the shell as products of one-dimensional first-order Hermite interpolation polynomials, it is possible to ensure that the displacement state for the assembled set of such elements is geometrically admissible. Monotonic convergence of the total potential energy is therefore possible as the modelling is successively refined. The element is systematically evaluated for its performance by considering various examples for which analytical or other solutions are available.
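For reference, the one-dimensional first-order Hermite interpolation polynomials on a unit interval 0 <= xi <= 1 are the standard cubic shape functions that interpolate the function value and first derivative at the two ends (quoted from standard finite-element practice, not from the report itself):

\[
H_1(\xi) = 1 - 3\xi^2 + 2\xi^3, \qquad H_2(\xi) = \xi - 2\xi^2 + \xi^3,
\]
\[
H_3(\xi) = 3\xi^2 - 2\xi^3, \qquad H_4(\xi) = -\xi^2 + \xi^3.
\]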
Abstract:
The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k >= 3. The best known polynomial time approximation algorithms require n^delta (for a positive constant delta depending on k) colours to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than the first type of algorithm. Our algorithms work as long as p(n) >= n^(-1+epsilon), where epsilon is a constant greater than 1/4.
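The random model described in the abstract can be sketched as follows; the balanced colour classes and the function name are simplifying assumptions made here:

# Random k-colourable graph: partition n vertices into k colour classes, then
# include each allowed edge (endpoints in different classes) with probability p.
import random

def random_k_colourable_graph(n, k, p, seed=None):
    rng = random.Random(seed)
    colour = {v: v % k for v in range(n)}  # balanced planted colouring
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if colour[u] != colour[v] and rng.random() < p]
    return colour, edges

colour, edges = random_k_colourable_graph(n=12, k=3, p=0.5, seed=1)
print(len(edges), "edges; planted colouring:", colour)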
Abstract:
Stability analysis is carried out considering free lateral vibrations of simply supported composite skew plates that are subjected to both direct and shear in-plane forces. An oblique stress component representation is used, consistent with the skew geometry of the plate. A double series, expressed in Chebyshev polynomials, is used here as the assumed deflection surface, and the Ritz method of solution is employed. Numerical results for different combinations of side ratios, skew angle, and in-plane loadings that act individually or in combination are obtained. In this method, the in-plane load parameter is varied until the fundamental frequency goes to zero; the value of the in-plane load then corresponds to a critical buckling load. Plots of frequency parameter versus in-plane loading are given for a few typical cases. Details of crossings and quasi-degeneracies of these curves are presented.
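The assumed deflection surface has the generic Chebyshev double-series form sketched below; the notation is ours, and any boundary functions used to enforce the simply supported edges are omitted:

\[
W(\xi, \eta) = \sum_{m=0}^{M} \sum_{n=0}^{N} A_{mn}\, T_m(\xi)\, T_n(\eta),
\qquad T_0 = 1, \quad T_1(\xi) = \xi, \quad T_{m+1}(\xi) = 2\xi\, T_m(\xi) - T_{m-1}(\xi).
\]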
Abstract:
Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2% respectively across a range of voltage and temperature values.
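For context, the exponential-polynomial leakage model that existing SLA frameworks assume can be sketched with a toy Monte Carlo estimate as below; the coefficients and the single Gaussian process parameter are made-up illustrations, and this is the closed form the paper's ANN approach replaces:

# Toy Monte Carlo sketch of the exponential-polynomial leakage model:
# leakage = exp of a polynomial in the process-parameter variation.
import numpy as np

rng = np.random.default_rng(0)
dP = rng.normal(0.0, 1.0, size=100_000)      # normalized process-parameter variation
a0, a1, a2 = -2.0, 0.8, 0.1                  # illustrative polynomial coefficients
leakage = np.exp(a0 + a1 * dP + a2 * dP**2)  # exponential-polynomial model

print("mean =", leakage.mean(), "std =", leakage.std())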
Abstract:
A k-dimensional box is the Cartesian product R_1 x R_2 x ... x R_k where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted as box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is defined as the Cartesian product R_1 x R_2 x ... x R_k where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted as cub(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-cubes. The threshold dimension of a graph G(V, E) is the smallest integer k such that E can be covered by k threshold spanning subgraphs of G. In this paper we show that there exists no polynomial-time algorithm for approximating the threshold dimension of a graph on n vertices with a factor of O(n^(0.5-epsilon)) for any epsilon > 0 unless NP = ZPP. From this result we show that there exists no polynomial-time algorithm for approximating the boxicity and the cubicity of a graph on n vertices with factor O(n^(0.5-epsilon)) for any epsilon > 0 unless NP = ZPP. In fact, all these hardness results hold even for a highly structured class of graphs, namely the split graphs. We also show that it is NP-complete to determine whether a given split graph has boxicity at most 3. (C) 2010 Elsevier B.V. All rights reserved.
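The intersection-graph definition used above can be sketched directly: each box is a list of closed intervals, and two boxes are adjacent exactly when their intervals overlap in every dimension. This only illustrates the definition, not the hardness reductions:

# Build the intersection graph of a collection of k-dimensional boxes.
from itertools import combinations

def intersection_graph(boxes):
    def intersects(b1, b2):
        # Closed intervals overlap in every dimension.
        return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(b1, b2))
    return [(i, j) for (i, b1), (j, b2) in combinations(enumerate(boxes), 2)
            if intersects(b1, b2)]

# Three 2-dimensional boxes: the first two overlap, the third is disjoint from both.
boxes = [[(0, 1), (0, 1)], [(0.5, 1.5), (0.5, 1.5)], [(3, 4), (3, 4)]]
print(intersection_graph(boxes))  # [(0, 1)]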
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding if G has a dominating set of cardinality at most k, and the problem of determining if G has a Hamilton circuit, are NP-complete. Polynomial time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures. So the search for efficient algorithms for these problems in many classes still continues. A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial time solvable on permutation graphs. Algorithms that are already available are of time complexity O(n^2) or more, and space complexity O(n^2), on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n^2) time and O(n) space algorithm for the Hamilton circuit problem.
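The geometric (matching-diagram) view of permutation graphs mentioned above can be sketched as follows: vertex i is a segment joining position i on a top line to the position of i in the permutation on a bottom line, and two vertices are adjacent exactly when their segments cross. The sketch below only builds the graph; it is not the paper's O(n) domination algorithm:

# Permutation graph from a permutation of 0..n-1: i < j are adjacent exactly
# when i appears after j in the permutation, i.e. their segments cross.
def permutation_graph(perm):
    pos = {v: idx for idx, v in enumerate(perm)}  # position of each value on the bottom line
    n = len(perm)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if pos[i] > pos[j]]

print(permutation_graph([2, 0, 3, 1]))  # [(0, 2), (1, 2), (1, 3)]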