323 results for quadratic polynomial
Abstract:
The chemical shift of the X-ray K-absorption edge of Co was studied in a large number of compounds, complexes (spinels) and minerals of Co in its different oxidation states, having widely different crystal structures and containing different types of bonding and various types of ligands, and the results were reported collectively, for the first time, in a single paper. A quadratic relationship between the chemical shift and the effective charge on the absorbing atom was established by least-squares regression analysis, with the linear term shown to be dominant. This relation was used to evaluate the charge on the Co ion in a number of minerals. The effects on the chemical shift of the oxidation state of the absorbing atom; of the bond length, crystal structure and higher-shell atoms of the molecule; and of the electronegativity, atomic number and ionic radius of the ligand were discussed.
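The paper's calibration data are not reproduced here; the following is a minimal sketch, using made-up illustrative values, of how a quadratic chemical-shift-versus-effective-charge relation can be fitted by least squares, how the dominance of the linear term can be inspected, and how the fit can be inverted to estimate the charge from a measured shift.

```python
import numpy as np

# Hypothetical (effective charge q, chemical shift in eV) pairs; illustrative only,
# not the measured values from the paper.
q = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
shift = np.array([0.0, 2.1, 4.4, 6.9, 9.6, 12.5, 15.6])

# Least-squares fit of shift = a*q**2 + b*q + c (quadratic in the effective charge).
a, b, c = np.polyfit(q, shift, deg=2)
print(f"shift ~ {a:.3f}*q^2 + {b:.3f}*q + {c:.3f}")

# Compare the linear and quadratic contributions over the charge range:
# if |b*q| >> |a*q**2| throughout, the linear term dominates.
print("linear/quadratic term ratio at q=3:", abs(b * 3) / abs(a * 9))

# Inverting the fitted relation gives the effective charge for a measured shift.
measured_shift = 8.0
roots = np.roots([a, b, c - measured_shift])
print("estimated charge:", roots[(roots > 0) & (roots < 4)])
```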
Abstract:
The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k >= 3. The best known polynomial time approximation algorithms require n^δ colours (for a positive constant δ depending on k) to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than the first type of algorithms. Our algorithms work as long as p(n) >= n^(-1+ε), where ε is a constant greater than 1/4.
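As a concrete illustration of the random model described above (the paper's colouring algorithms themselves are not reproduced), the sketch below generates a k-colourable graph by partitioning n vertices into k colour classes and including each allowed cross-class edge independently with probability p.

```python
import random

def random_k_colourable_graph(n, k, p, seed=0):
    """Sample a graph from the planted k-colouring model: vertices are split into
    k colour classes and each edge between different classes is included
    independently with probability p."""
    rng = random.Random(seed)
    colour = [rng.randrange(k) for _ in range(n)]   # planted colouring
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if colour[u] != colour[v] and rng.random() < p:
                edges.append((u, v))
    return colour, edges

# Example: n = 100 vertices, k = 3 classes, p = n^(-1 + 0.3) (epsilon = 0.3 > 1/4).
n, k = 100, 3
p = n ** (-1 + 0.3)
colouring, edges = random_k_colourable_graph(n, k, p)
print(len(edges), "edges; the planted colouring certifies 3-colourability")
```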
Abstract:
A unified gauge theory of massless and massive spin-2 fields is of considerable current interest. The Poincaré gauge theories with quadratic Lagrangian are linearized, and the conditions on the parameters are found which will lead to viable linear theories with massive gauge particles. As well as the 2⁺ massless gravitons coming from the translational gauge potential, the rotational gauge potentials, in the linearized limit, give rise to 2⁺ and 2⁻ particles of equal mass, as well as a massive pseudoscalar.
Abstract:
The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults and non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. The test signals consider low-order polynomial growth of damage indicators with time to simulate gradual or incipient faults, and step changes in the signal to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters achieve noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capabilities; this shows the potential of soft computing methods for specific signal-processing applications.
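The paper's optimized integer weights are not reproduced here; the sketch below only illustrates, under assumed weights and window size, how a weighted recursive median filter of the general kind described above can be applied: past samples in the window are taken from the already-filtered output (the recursive part), each sample is replicated according to its integer weight, and the median of the replicated set is returned.

```python
import numpy as np

def weighted_recursive_median(x, weights):
    """Weighted recursive median filter (illustrative sketch).
    `weights` has odd length 2m+1; the window at index n uses the m most recent
    *filtered* outputs y[n-m:n] and the raw inputs x[n:n+m+1]. Each sample is
    replicated by its integer weight before taking the median."""
    x = np.asarray(x, dtype=float)
    m = len(weights) // 2
    y = x.copy()
    for n in range(len(x)):
        window = []
        for j, w in zip(range(n - m, n + m + 1), weights):
            if j < 0 or j >= len(x):
                continue
            sample = y[j] if j < n else x[j]   # recursion: reuse filtered past samples
            window.extend([sample] * int(w))
        y[n] = np.median(window)
    return y

# Example: ramp (incipient fault) plus a step (abrupt fault), with noise and outliers.
t = np.arange(200)
clean = 0.01 * t + np.where(t > 120, 1.5, 0.0)
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(t.size)
noisy[[40, 90, 160]] += 3.0                      # non-Gaussian outliers
filtered = weighted_recursive_median(noisy, weights=[1, 2, 3, 2, 1])  # assumed weights
```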
Abstract:
The unsteady laminar incompressible three-dimensional boundary layer flow and heat transfer on a flat plate with an attached cylinder have been studied when the free stream velocity components and the wall temperature vary inversely as linear and quadratic functions of time, respectively. The governing semisimilar partial differential equations with three independent variables have been solved numerically using a quasilinear finite-difference scheme. The results indicate that the skin friction increases with the parameter characterizing the unsteadiness in the free stream velocity and with the streamwise distance, but the heat transfer decreases. However, the skin friction and heat transfer are found to change little along the spanwise direction. The effect of the Prandtl number on the heat transfer is more pronounced when the unsteadiness parameter is small, whereas the effect of the dissipation parameter is more pronounced when it is comparatively large.
Abstract:
The different formalisms for the representation of thermodynamic data on dilute multicomponent solutions are critically reviewed. The thermodynamic consistency of the formalisms is examined and the interrelations between them are highlighted. The options and constraints in the use of the interaction parameter and Darken's quadratic formalisms for multicomponent solutions are discussed in the light of the available experimental data. The truncated Maclaurin series expansion is thermodynamically inconsistent unless special relations between interaction parameters are invoked. However, the lack of strict mathematical consistency does not affect the practical use of the formalism. Expressions for excess partial properties can be integrated along defined composition paths without significant loss of accuracy. Although thermodynamically consistent, the applicability of Darken's quadratic formalism to strongly interacting systems remains to be established by experiment.
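For reference, a commonly quoted first-order (truncated Maclaurin) form of the interaction parameter formalism for a dilute solute 2 in solvent 1 of an n-component solution is sketched below; the notation (activity coefficient γ, mole fractions x_j, interaction parameters ε) follows standard usage rather than the paper itself.

```latex
% First-order truncated Maclaurin (Wagner) expansion for solute 2
% in a dilute multicomponent solution with solvent 1:
\ln \gamma_2 \;=\; \ln \gamma_2^{\circ} \;+\; \sum_{j=2}^{n} \varepsilon_2^{\,j}\, x_j ,
\qquad
\varepsilon_2^{\,j} \;=\; \left. \frac{\partial \ln \gamma_2}{\partial x_j} \right|_{x_1 \to 1}
```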
Abstract:
Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the errors in mean and standard deviation relative to Monte Carlo-based simulations being less than 1% and 2%, respectively, across a range of voltage and temperature values.
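The paper's activation function is not specified in the abstract; the sketch below only illustrates, for a standard-normal-CDF (probit-like) activation chosen here purely for illustration, how a single neuron's output mean admits a closed form under Gaussian process variation, using the identity E[Φ(a)] = Φ(μ/√(1+σ²)) for a ~ N(μ, σ²).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# One neuron with a probit (standard normal CDF) activation; the process
# parameter x is Gaussian, so the pre-activation a = w*x + b is Gaussian too.
w, b = 0.8, -0.3
mu_x, sigma_x = 0.5, 0.2
mu_a, sigma_a = w * mu_x + b, abs(w) * sigma_x

# Closed-form mean of the neuron output: E[Phi(a)] = Phi(mu_a / sqrt(1 + sigma_a**2)).
analytic_mean = norm.cdf(mu_a / np.sqrt(1.0 + sigma_a**2))

# Monte Carlo check of the same quantity.
samples = norm.cdf(w * rng.normal(mu_x, sigma_x, size=200_000) + b)
print(f"analytic mean = {analytic_mean:.5f}, Monte Carlo mean = {samples.mean():.5f}")
```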
Abstract:
A k-dimensional box is the Cartesian product R_1 × R_2 × ... × R_k, where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is defined as the Cartesian product R_1 × R_2 × ... × R_k where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-cubes. The threshold dimension of a graph G(V, E) is the smallest integer k such that E can be covered by k threshold spanning subgraphs of G. In this paper we show that there exists no polynomial-time algorithm for approximating the threshold dimension of a graph on n vertices within a factor of O(n^(0.5−ε)) for any ε > 0 unless NP = ZPP. From this result we show that there exists no polynomial-time algorithm for approximating the boxicity or the cubicity of a graph on n vertices within a factor of O(n^(0.5−ε)) for any ε > 0 unless NP = ZPP. In fact, all these hardness results hold even for a highly structured class of graphs, namely the split graphs. We also show that it is NP-complete to determine whether a given split graph has boxicity at most 3.
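As an illustration of the box representation underlying boxicity (the hardness constructions themselves are not reproduced), the sketch below builds the intersection graph of a set of k-dimensional boxes: two vertices are adjacent exactly when their boxes overlap in every coordinate.

```python
from itertools import combinations

def intersection_graph(boxes):
    """Each box is a list of k closed intervals (lo, hi), one per dimension.
    Two boxes intersect iff their intervals overlap in every dimension."""
    edges = set()
    for (u, bu), (v, bv) in combinations(enumerate(boxes), 2):
        if all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(bu, bv)):
            edges.add((u, v))
    return edges

# Three 2-dimensional boxes: boxes 0 and 1 overlap, box 2 is disjoint from both,
# so this graph has a 2-dimensional box representation (boxicity <= 2).
boxes = [
    [(0, 2), (0, 2)],
    [(1, 3), (1, 3)],
    [(5, 6), (5, 6)],
]
print(intersection_graph(boxes))   # {(0, 1)}
```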
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding if G has a dominating set of cardinality at most k, and the problem of determining if G has a Hamilton circuit, are NP-complete. Polynomial time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures. So the search for efficient algorithms for these problems in many classes still continues. A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial time solvable on permutation graphs. Algorithms that are already available are of time complexity O(n^2) or more, and space complexity O(n^2), on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of the geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n^2) time and O(n) space algorithm for the Hamilton circuit problem.
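For context (the paper's O(n) and O(n^2) algorithms are not reproduced here), the sketch below builds a permutation graph from a permutation π: vertices i < j are adjacent exactly when π reverses their order, i.e. the edges are the inversions of π.

```python
def permutation_graph(pi):
    """Permutation graph of pi (0-indexed): i < j are adjacent iff
    i appears after j in pi, i.e. the pair (i, j) is an inversion."""
    pos = {v: idx for idx, v in enumerate(pi)}
    n = len(pi)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if pos[i] > pos[j]}

# Example permutation on 5 vertices.
pi = [2, 0, 4, 1, 3]
print(sorted(permutation_graph(pi)))   # [(0, 2), (1, 2), (1, 4), (3, 4)]
```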
Abstract:
This paper proposes a differential evolution based method of improving the performance of conventional guidance laws at high heading errors, without resorting to techniques from optimal control theory, which are complicated and suffer from several limitations. The basic guidance law is augmented with a term that is a polynomial function of the heading error. The values of the coefficients of the polynomial are found by applying the differential evolution algorithm. The results are compared with the basic guidance law and with the all-aspect proportional navigation laws in the literature. A scheme for online implementation of the proposed law for application in practice is also given.
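The paper's engagement model and cost function are not given in the abstract; the toy sketch below only shows the general pattern of using differential evolution (here via SciPy) to pick coefficients of a polynomial correction term so as to minimise an assumed, made-up cost over a range of initial heading errors.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in cost: penalise a surrogate residual over a range of
# heading errors when the basic command is augmented by a polynomial in the
# heading error. This is NOT the paper's engagement model.
heading_errors = np.linspace(0.1, 1.2, 25)          # radians

def cost(coeffs):
    a1, a2, a3 = coeffs
    correction = a1 * heading_errors + a2 * heading_errors**2 + a3 * heading_errors**3
    # Surrogate: the correction should cancel a notional nonlinearity sin(e) - e.
    residual = (np.sin(heading_errors) - heading_errors) + correction
    return float(np.sum(residual**2))

result = differential_evolution(cost, bounds=[(-2, 2)] * 3, seed=0, tol=1e-10)
print("best polynomial coefficients:", result.x, "cost:", result.fun)
```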
Abstract:
Using normal mode analysis, Rayleigh-Taylor instability is investigated for a three-layer viscous stratified incompressible steady flow in which the top (3rd) and bottom (1st) layers extend to infinity and the middle layer has a small thickness δ. The wave Reynolds number in the middle layer is assumed to be sufficiently small. A dispersion relation (a seventh degree polynomial in the wave frequency ω), valid up to the order of the maximal value of all possible K^j (j ≤ 0, where K is the wave number) in each coefficient of the polynomial, is obtained. A sufficient condition for instability is found for the first time by pursuing a medium-wavelength analysis. It depends on the ratios (α and β) of the coefficients of viscosity, the thickness of the middle layer δ, the surface tension ratio T and the wave number K. This is a new analytical criterion for Rayleigh-Taylor instability of three-layer fluids. It recovers the results of the corresponding problem for two-layer fluids. Among the results obtained, it is observed that taking the coefficients of viscosity of the 2nd and 3rd layers to be the same can inhibit the effect of surface tension completely. For large wave number K, the thickness of the middle layer should be correspondingly small to keep the domain of dependence of the threshold wave number K_c constant for fixed α, β and T.
Abstract:
The curvature of the line of critical points in a reentrant ternary mixture is determined by approaching the double critical point (DCP) extremely closely. The results establish the continuous and quadratic nature of this line. Our study encompasses loop sizes (ΔT) as small as 663 mK. The DCP is realized when ΔT becomes zero.
Abstract:
In this paper, we present an algebraic method to study and design spatial parallel manipulators that demonstrate isotropy in the force and moment distributions. We use the force and moment transformation matrices separately, and derive conditions for their isotropy individually as well as in combination. The isotropy conditions are derived in closed form in terms of the invariants of the quadratic forms associated with these matrices. The formulation is applied to a class of Stewart platform manipulators, and a multi-parameter family of isotropic manipulators is identified analytically. We show that it is impossible to obtain a spatially isotropic configuration within this family. We also compute the isotropic configurations of an existing manipulator and demonstrate a procedure for designing the manipulator for isotropy at a given configuration.
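The closed-form isotropy conditions of the paper are not reproduced here; the sketch below only illustrates the underlying idea numerically for an assumed force transformation matrix H: the force distribution is isotropic when the quadratic form H Hᵀ is a scalar multiple of the identity, i.e. all its eigenvalues (equivalently, the singular values of H) coincide.

```python
import numpy as np

def is_force_isotropic(H, tol=1e-9):
    """Check isotropy of a 3 x m force transformation matrix H:
    H @ H.T should be a scalar multiple of the 3 x 3 identity."""
    Q = H @ H.T                     # quadratic form associated with H
    scale = np.trace(Q) / Q.shape[0]
    return np.allclose(Q, scale * np.eye(Q.shape[0]), atol=tol), scale

# Assumed example: six unit in-plane directions arranged symmetrically plus a
# rescaled constant third row, chosen so H @ H.T = 3*I (not a real Stewart geometry).
angles = np.arange(6) * np.pi / 3
H = np.vstack([np.cos(angles), np.sin(angles), np.full(6, 1.0)])
H[2] /= np.sqrt(2.0)                # makes the third-row contribution equal the others
isotropic, scale = is_force_isotropic(H)
print(isotropic, scale)             # True 3.0
```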
Abstract:
In this paper we describe a method for the optimum design of fiber reinforced composite laminates for strength by ranking. The software developed based on this method is capable of designing laminates for strength which are subjected to in-plane and/or bending loads and, optionally, hygrothermal loads. Only symmetric laminates are considered, which are assumed to be made of repeated sublaminate construction. Various layup schemes are evaluated based on the laminated plate theory and a quadratic failure criterion for the given mechanical and hygrothermal loads. The optimum layup sequence in the sublaminate and the number of such sublaminates required are obtained. Further, a ply-drop round-off scheme is adopted to arrive at an optimum laminate thickness. As an example, a family of 0/90/45/−45 bi-directional lamination schemes is examined for different types of loads and the gains in optimising the ply orientations in a sublaminate are demonstrated.
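The abstract does not say which quadratic failure criterion is used; purely as an illustration of that class of criteria, the sketch below evaluates the Tsai-Wu failure index for a single ply from assumed, made-up lamina strengths and an in-plane stress state (failure is predicted when the index reaches 1).

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Tsai-Wu quadratic failure index for plane stress in material axes.
    s1, s2: normal stresses along/transverse to the fibre; t12: in-plane shear.
    Xt, Xc, Yt, Yc, S: tensile/compressive strengths and shear strength (all > 0)."""
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)      # common empirical estimate
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Assumed carbon/epoxy-like strengths in MPa and a trial ply stress state.
index = tsai_wu_index(s1=600.0, s2=20.0, t12=30.0,
                      Xt=1500.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0)
print(f"Tsai-Wu failure index = {index:.2f}  (failure predicted if >= 1)")
```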
Abstract:
The element-based piecewise smooth functional approximation in the conventional finite element method (FEM) results in discontinuous first and higher order derivatives across element boundaries. Despite the significant advantages of the FEM in modelling complicated geometries, a motivation in developing mesh-free methods has been the ease with which higher order globally smooth shape functions can be derived via the reproduction of polynomials. There is thus a case for combining these advantages in a so-called hybrid scheme or a 'smooth FEM' that, whilst retaining the popular mesh-based discretization, obtains shape functions with uniform C^p (p >= 1) continuity. One such recent attempt, a NURBS based parametric bridging method (Shaw et al 2008b), uses polynomial reproducing, tensor-product non-uniform rational B-splines (NURBS) over a typical FE mesh and relies upon a (possibly piecewise) bijective geometric map between the physical domain and a rectangular (cuboidal) parametric domain. The present work aims at a significant extension and improvement of this concept by replacing NURBS with DMS-splines (say, of degree n > 0) that are defined over triangles and provide C^(n-1) continuity across the triangle edges. This relieves the need for a geometric map that could precipitate ill-conditioning of the discretized equations. Delaunay triangulation is used to discretize the physical domain, and shape functions are constructed via the polynomial reproduction condition, which quite remarkably relieves the solution of its sensitive dependence on the selected knotsets. Derivatives of shape functions are also constructed based on the principle of reproduction of derivatives of polynomials (Shaw and Roy 2008a). Within the present scheme, the triangles also serve as background integration cells in weak formulations, thereby overcoming non-conformability issues. Numerical examples involving the evaluation of derivatives of targeted functions up to the fourth order, and applications of the method to a few boundary value problems of general interest in solid mechanics over (non-simply connected) bounded domains in 2D, are presented towards the end of the paper.
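The DMS-spline construction itself is beyond a short sketch; the snippet below only checks the polynomial reproduction condition mentioned above on ordinary 1D piecewise-linear FE hat functions, i.e. that the shape functions satisfy Σ_i N_i(x) = 1 (partition of unity) and Σ_i N_i(x) x_i = x (linear completeness) at points inside the mesh.

```python
import numpy as np

def hat_functions(nodes, x):
    """Evaluate the piecewise-linear FE shape functions N_i(x) on a 1D mesh."""
    N = np.zeros((len(nodes), len(x)))
    for i, xi in enumerate(nodes):
        left = nodes[i - 1] if i > 0 else xi
        right = nodes[i + 1] if i < len(nodes) - 1 else xi
        # rising part on [left, xi] and falling part on (xi, right]
        N[i] = np.where((x >= left) & (x <= xi) & (xi > left), (x - left) / (xi - left), 0)
        N[i] += np.where((x > xi) & (x <= right) & (right > xi), (right - x) / (right - xi), 0)
        N[i, np.isclose(x, xi)] = 1.0
    return N

nodes = np.array([0.0, 0.3, 0.7, 1.0])
x = np.linspace(0.0, 1.0, 11)
N = hat_functions(nodes, x)

# Reproduction of constants and linears by the shape functions.
print(np.allclose(N.sum(axis=0), 1.0))   # sum_i N_i(x) == 1
print(np.allclose(nodes @ N, x))         # sum_i N_i(x) * x_i == x
```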