956 results for quadratic polynomial
Abstract:
The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k ≥ 3. The best known polynomial time approximation algorithms require n^δ (for a positive constant δ depending on k) colours to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than the first type. Our algorithms work as long as p(n) ≥ n^(−1+ε), where ε is a constant greater than 1/4.
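As an illustrative sketch of the random model only (the paper's own algorithms are not reproduced here), the snippet below plants k colour classes, draws each allowed edge with probability p, and then runs a plain greedy colouring; the greedy step is a stand-in, not the authors' method.

```python
import random

def planted_k_colourable(n, k, p, seed=0):
    """Random model from the abstract: partition vertices into k colour
    classes, then keep each allowed (bichromatic) edge with probability p."""
    rng = random.Random(seed)
    colour = [rng.randrange(k) for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if colour[i] != colour[j] and rng.random() < p}
    return edges

def greedy_colouring(n, edges):
    """Plain greedy colouring (illustrative only; gives a proper colouring,
    not necessarily with k colours)."""
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    c = {}
    for v in range(n):
        used = {c[u] for u in adj[v] if u in c}
        c[v] = next(x for x in range(n) if x not in used)
    return c
```

Any colouring returned is proper by construction, which makes the sketch easy to sanity-check on generated instances.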
Abstract:
A unified gauge theory of massless and massive spin-2 fields is of considerable current interest. The Poincaré gauge theories with quadratic Lagrangian are linearized, and the conditions on the parameters are found which will lead to viable linear theories with massive gauge particles. As well as the 2^+ massless gravitons coming from the translational gauge potential, the rotational gauge potentials, in the linearized limit, give rise to 2^+ and 2^− particles of equal mass, as well as a massive pseudoscalar.
Abstract:
The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from the time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults and non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. Test signals consider low-order polynomial growth of damage indicators with time to simulate gradual or incipient faults, and step changes in the signal to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters result in noise reductions of 54–71% and 59–73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving average FIR filter, which causes significant feature distortion and has poor outlier removal capabilities; this shows the potential of soft computing methods for specific signal-processing applications. (C) 2005 Elsevier B.V. All rights reserved.
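A minimal sketch of a weighted recursive median filter follows; the 5-tap layout and the integer weights are illustrative assumptions, not the weights the paper selects by genetic algorithm. Integer weights are realized by sample replication before taking the median, and the first two taps feed back previous outputs, which is what makes the filter recursive.

```python
from statistics import median

def wrm_filter(x, w=(1, 2, 3, 2, 1)):
    """5-tap weighted recursive median (hypothetical tap layout):
    taps cover [y[n-2], y[n-1], x[n], x[n+1], x[n+2]].
    Integer weight = number of times a tap's value is replicated
    in the sample whose median becomes the output."""
    y = []
    last = len(x) - 1
    for n in range(len(x)):
        taps = [
            y[n - 2] if n >= 2 else x[0],   # recursive taps: past outputs
            y[n - 1] if n >= 1 else x[0],
            x[n],
            x[min(n + 1, last)],            # edges handled by clamping
            x[min(n + 2, last)],
        ]
        sample = []
        for v, wi in zip(taps, w):
            sample.extend([v] * wi)
        y.append(median(sample))
    return y
```

On a toy signal the filter removes an isolated outlier while preserving a step change, which is exactly the behaviour the abstract contrasts with the moving average FIR filter.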
Abstract:
The unsteady laminar incompressible three-dimensional boundary layer flow and heat transfer on a flat plate with an attached cylinder have been studied when the free stream velocity components and wall temperature vary inversely as linear and quadratic functions of time, respectively. The governing semisimilar partial differential equations with three independent variables have been solved numerically using a quasilinear finite-difference scheme. The results indicate that the skin friction increases with the parameter characterizing the unsteadiness in the free stream velocity and with the streamwise distance, but the heat transfer decreases. However, the skin friction and heat transfer are found to change little in the transverse direction. The effect of the Prandtl number on the heat transfer is found to be more pronounced when the unsteadiness parameter is small, whereas the effect of the dissipation parameter is more pronounced when it is comparatively large.
Abstract:
The different formalisms for the representation of thermodynamic data on dilute multicomponent solutions are critically reviewed. The thermodynamic consistency of the formalisms is examined and the interrelations between them are highlighted. The options and constraints in the use of the interaction parameter and Darken's quadratic formalisms for multicomponent solutions are discussed in the light of the available experimental data. The truncated Maclaurin series expansion is thermodynamically inconsistent unless special relations between interaction parameters are invoked. However, the lack of strict mathematical consistency does not affect the practical use of the formalism. Expressions for excess partial properties can be integrated along defined composition paths without significant loss of accuracy. Although thermodynamically consistent, the applicability of Darken's quadratic formalism to strongly interacting systems remains to be established by experiment.
Abstract:
Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage and temperature aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of leakage current of ISCAS'85 circuits can be predicted accurately with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2% respectively across a range of voltage and temperature values.
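The paper's activation function is not given in the abstract; as an illustration of why the choice of activation matters for closed-form statistics, a probit-style activation Φ (the standard normal CDF) admits an exact mean under a Gaussian input, via the known identity E[Φ(X)] = Φ(μ/√(1+σ²)) for X ~ N(μ, σ²). The sketch below checks the analytic mean against Monte Carlo, which is the comparison the paper reports for its own model.

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def analytic_mean(mu, sigma):
    """Exact E[phi(X)] for X ~ N(mu, sigma^2)."""
    return phi(mu / math.sqrt(1.0 + sigma ** 2))

def mc_mean(mu, sigma, n=200_000, seed=1):
    """Monte Carlo estimate of the same expectation."""
    rng = random.Random(seed)
    return sum(phi(rng.gauss(mu, sigma)) for _ in range(n)) / n
```

A sigmoidal activation has no such closed form, which is the obstacle the abstract describes for standard ANN leakage models.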
Abstract:
A k-dimensional box is the Cartesian product R_1 × R_2 × ... × R_k, where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is defined as the Cartesian product R_1 × R_2 × ... × R_k, where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-cubes. The threshold dimension of a graph G(V, E) is the smallest integer k such that E can be covered by k threshold spanning subgraphs of G. In this paper we show that there exists no polynomial-time algorithm for approximating the threshold dimension of a graph on n vertices within a factor of O(n^(0.5−ε)) for any ε > 0 unless NP = ZPP. From this result we show that there exists no polynomial-time algorithm for approximating the boxicity or the cubicity of a graph on n vertices within a factor of O(n^(0.5−ε)) for any ε > 0 unless NP = ZPP. In fact, all these hardness results hold even for a highly structured class of graphs, namely the split graphs. We also show that it is NP-complete to determine whether a given split graph has boxicity at most 3. (C) 2010 Elsevier B.V. All rights reserved.
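The k = 1 case of the definition is easy to make concrete: a graph has boxicity 1 exactly when it is the intersection graph of closed intervals. The sketch below builds that intersection graph from a list of intervals; it illustrates the definition only, not any algorithm from the paper.

```python
def interval_intersection_graph(intervals):
    """Boxicity-1 case: vertices are 1-D closed intervals [a, b];
    an edge joins two vertices whenever their intervals overlap."""
    n = len(intervals)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            (a1, b1), (a2, b2) = intervals[i], intervals[j]
            if a1 <= b2 and a2 <= b1:  # closed-interval intersection test
                edges.add((i, j))
    return edges
```

For general k the same test is applied coordinate-wise, with an edge present only when the boxes overlap in every one of the k dimensions.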
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding if G has a dominating set of cardinality at most k, and the problem of determining if G has a Hamilton circuit, are NP-complete. Polynomial time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures. So the search for efficient algorithms for these problems in many classes still continues. A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial time solvable on permutation graphs. Algorithms that are already available are of time complexity O(n^2) or more, and space complexity O(n^2), on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n^2) time and O(n) space algorithm for the Hamilton circuit problem.
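For concreteness, a permutation graph on vertices 0..n−1 has an edge (i, j), i < j, exactly when the permutation inverts the pair. The sketch below builds such a graph and finds a minimum dominating set by brute force over subsets; this exponential check is for tiny instances only and is in no way the paper's O(n) algorithm.

```python
from itertools import combinations

def permutation_graph_edges(pi):
    """Edges of the permutation graph of pi: (i, j), i < j, iff pi[i] > pi[j]."""
    n = len(pi)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j]}

def min_dominating_set(n, edges):
    """Brute-force minimum dominating set (exponential; tiny graphs only)."""
    adj = {v: {v} for v in range(n)}      # closed neighbourhoods
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    for k in range(1, n + 1):
        for cand in combinations(range(n), k):
            if set().union(*(adj[v] for v in cand)) == set(range(n)):
                return set(cand)
```

Such a brute-force oracle is a convenient correctness reference when testing an efficient algorithm on small permutations.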
Abstract:
This paper proposes a differential evolution based method of improving the performance of conventional guidance laws at high heading errors, without resorting to techniques from optimal control theory, which are complicated and suffer from several limitations. The basic guidance law is augmented with a term that is a polynomial function of the heading error. The values of the coefficients of the polynomial are found by applying the differential evolution algorithm. The results are compared with the basic guidance law, and the all-aspect proportional navigation laws in the literature. A scheme for online implementation of the proposed law for application in practice is also given. (c) 2010 Elsevier Ltd. All rights reserved.
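A minimal DE/rand/1 loop with binomial crossover is sketched below. The paper's actual objective (missile guidance performance at high heading errors) is not reproduced here; a simple quadratic function stands in for the cost being minimized, and the population size, F, and CR values are illustrative defaults.

```python
import random

def differential_evolution(obj, bounds, pop_size=20, gens=150,
                           F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer over box constraints (a sketch,
    not a production optimizer)."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
            trial = [
                # mutate with prob CR, clip to bounds; else keep parent gene
                min(max(a[j] + F * (b[j] - c[j]), bounds[j][0]), bounds[j][1])
                if rng.random() < CR else pop[i][j]
                for j in range(d)
            ]
            tc = obj(trial)
            if tc <= cost[i]:                 # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

In the paper's setting, the decision vector would hold the coefficients of the polynomial in heading error that augments the basic guidance law, and the objective would come from trajectory simulation.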
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in polynomial time in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that have good performance with synthetic data and discover easily interpretable structures in real-world datasets: dialectical variation in the spoken Finnish language, division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere occurrence of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
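One of the listed patterns is easy to check directly: a binary matrix is nested when its row supports can be ordered to form a chain under set inclusion. The polynomial-time sketch below sorts rows by support size and verifies the chain; it illustrates the perfect-pattern test only, not the NP-complete minimum-flip distance problem.

```python
def is_nested(matrix):
    """True iff the row supports of a 0/1 matrix form a chain under
    set inclusion (the 'nestedness' pattern)."""
    supports = sorted(
        ({j for j, v in enumerate(row) if v} for row in matrix),
        key=len, reverse=True,
    )
    # every support must contain the next (smaller or equal) one
    return all(supports[i] >= supports[i + 1] for i in range(len(supports) - 1))
```

For noisy data, the corresponding distance question is how few 0/1 flips make `is_nested` return True, which the abstract notes is NP-complete for most of these patterns.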
Abstract:
Using normal mode analysis, Rayleigh–Taylor instability is investigated for three-layer viscous stratified incompressible steady flow, when the top (3rd) and bottom (1st) layers extend up to infinity and the middle layer has a small thickness δ. The wave Reynolds number in the middle layer is assumed to be sufficiently small. A dispersion relation (a seventh degree polynomial in wave frequency ω), valid up to the order of the maximal value of all possible K^j (j ⩽ 0, where K is the wave number) in each coefficient of the polynomial, is obtained. A sufficient condition for instability is found for the first time, pursuing a medium wavelength analysis. It depends on the ratios (α and β) of the coefficients of viscosity, the thickness of the middle layer δ, the surface tension ratio T and the wave number K. This is a new analytical criterion for Rayleigh–Taylor instability of three-layer fluids. It recovers the results of the corresponding problem for two-layer fluids. Among the results obtained, it is observed that taking the viscosity coefficients of the 2nd and 3rd layers to be the same can inhibit the effect of surface tension completely. For large wave number K, the thickness of the middle layer should be correspondingly small to keep the domain of dependence of the threshold wave number K_c constant for fixed α, β and T.
Abstract:
Modern elementary particle physics is based on quantum field theories. Currently, our understanding is that, on the one hand, the smallest structures of matter and, on the other hand, the composition of the universe are based on quantum field theories, which present the observable phenomena by describing particles as vibrations of the fields. The Standard Model of particle physics is a quantum field theory describing the electromagnetic, weak, and strong interactions in terms of a gauge field theory. However, it is believed that the Standard Model describes physics properly only up to a certain energy scale. This scale cannot be much larger than the so-called electroweak scale, i.e., the masses of the gauge fields W^± and Z^0. Beyond this scale, the Standard Model has to be modified. In this dissertation, supersymmetric theories are used to tackle the problems of the Standard Model. For example, the quadratic divergences, which plague the Higgs boson mass in the Standard Model, cancel in supersymmetric theories. Experimental facts concerning the neutrino sector indicate that lepton number is violated in Nature. On the other hand, the lepton number violating Majorana neutrino masses can induce sneutrino-antisneutrino oscillations in any supersymmetric model. In this dissertation, I present some viable signals for detecting the sneutrino-antisneutrino oscillation at colliders. At the e-gamma collider (at the International Linear Collider), the numbers of electron-sneutrino-antisneutrino oscillation signal events are quite high, and the backgrounds are quite small. A similar study for the LHC shows that, even though there are several backgrounds, the sneutrino-antisneutrino oscillations can be detected. A useful asymmetry observable is introduced and studied. Usually, the oscillation probability formula where the sneutrinos are produced at rest is used. However, here, we study a general oscillation probability. The Lorentz factor and the distance at which the measurement is made inside the detector can have effects, especially when the sneutrino decay width is very small. These effects are demonstrated for a certain scenario at the LHC.
Abstract:
The curvature of the line of critical points in a reentrant ternary mixture is determined by approaching the double critical point (DCP) extremely closely. The results establish the continuous and quadratic nature of this line. Our study encompasses as small a loop size (ΔT) as 663 mK. The DCP is realized when ΔT becomes zero.
Abstract:
In this paper, we present an algebraic method to study and design spatial parallel manipulators that demonstrate isotropy in the force and moment distributions. We use the force and moment transformation matrices separately, and derive conditions for their isotropy individually as well as in combination. The isotropy conditions are derived in closed-form in terms of the invariants of the quadratic forms associated with these matrices. The formulation is applied to a class of Stewart platform manipulators, and a multi-parameter family of isotropic manipulators is identified analytically. We show that it is impossible to obtain a spatially isotropic configuration within this family. We also compute the isotropic configurations of an existing manipulator and demonstrate a procedure for designing the manipulator for isotropy at a given configuration.
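A common numerical statement of isotropy for a transformation matrix J is that J·Jᵀ is a scalar multiple of the identity (all singular values equal); the sketch below checks that condition. It is a generic illustration of the quadratic form involved, not the paper's closed-form invariant conditions.

```python
import numpy as np

def is_isotropic(J, tol=1e-9):
    """True iff J @ J.T is (numerically) a scalar multiple of the identity,
    i.e. all singular values of J are equal."""
    J = np.asarray(J, dtype=float)
    G = J @ J.T                       # quadratic form associated with J
    scale = np.trace(G) / G.shape[0]  # candidate scalar multiple
    return bool(np.allclose(G, scale * np.identity(G.shape[0]), atol=tol))
```

In the manipulator setting, J would be the force (or moment) transformation matrix at a given configuration, and isotropy means the mapping distorts no direction preferentially.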
Abstract:
In this paper we describe a method for the optimum design of fiber reinforced composite laminates for strength by ranking. The software developed based on this method is capable of designing laminates for strength which are subjected to inplane and/or bending loads and optionally hygrothermal loads. Only symmetric laminates are considered, which are assumed to be made of repeated sublaminate construction. Various layup schemes are evaluated based on the laminated plate theory and a quadratic failure criterion for the given mechanical and hygrothermal loads. The optimum layup sequence in the sublaminate and the number of such sublaminates required are obtained. Further, a ply-drop round-off scheme is adopted to arrive at an optimum laminate thickness. As an example, a family of 0/90/45/−45 bi-directional lamination schemes is examined for different types of loads and the gains in optimising the ply orientations in a sublaminate are demonstrated.
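The abstract does not name the specific quadratic failure criterion; the Tsai–Wu criterion is a common instance and is sketched below for plane stress, with a typical default choice for the interaction term F12. Failure of a ply is predicted when the index reaches 1.

```python
def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Tsai-Wu plane-stress failure index (illustrative quadratic criterion).
    s1, s2, t12: ply stresses; Xt/Xc, Yt/Yc: longitudinal/transverse tensile
    and compressive strengths (magnitudes); S: in-plane shear strength.
    Failure is predicted when the returned index >= 1."""
    F1 = 1 / Xt - 1 / Xc
    F2 = 1 / Yt - 1 / Yc
    F11 = 1 / (Xt * Xc)
    F22 = 1 / (Yt * Yc)
    F66 = 1 / S ** 2
    F12 = -0.5 * (F11 * F22) ** 0.5   # common default interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * t12 ** 2 + 2 * F12 * s1 * s2)
```

In a ranking scheme like the one described, each candidate layup is scored by evaluating such an index ply by ply under the applied mechanical and hygrothermal loads.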