963 results for Algebraic Polynomials
Abstract:
We report calculations using a reaction surface Hamiltonian for which the vibrations of a molecule are represented by 3N-8 normal coordinates, Q, and two large amplitude motions, s_1 and s_2. The exact form of the kinetic energy operator is derived in these coordinates. The potential surface is first represented as a quadratic in Q, the coefficients of which depend upon the values of s_1, s_2, and then extended to include up to Q^6 diagonal anharmonic terms. The vibrational energy levels are evaluated by solving the variational secular equations, using a basis of products of Hermite polynomials and appropriate functions of s_1, s_2. Our selected example is malonaldehyde (N=9) and we choose as surface parameters two OH distances of the migrating H in the internal hydrogen transfer. The reaction surface Hamiltonian is ideally suited to the study of the kind of tunneling dynamics present in malonaldehyde. Our results are in good agreement with previous calculations of the zero point tunneling splitting and in general agreement with observed data. Interpretation of our two-dimensional reaction surface states suggests that the OH stretching fundamental is incorrectly assigned in the infrared spectrum. This mode appears at a much lower frequency in our calculations due to substantial transition state character. (c) 2006 American Institute of Physics.
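For concreteness, the potential expansion described can be written schematically as below; the symbols F_k and c_{k,n} are illustrative names for the s-dependent quadratic and diagonal anharmonic coefficients, not necessarily the authors' notation.

```latex
V(\mathbf{Q}, s_1, s_2) \approx V_0(s_1, s_2)
  + \frac{1}{2} \sum_{k=1}^{3N-8} F_k(s_1, s_2)\, Q_k^2
  + \sum_{k=1}^{3N-8} \sum_{n=3}^{6} c_{k,n}(s_1, s_2)\, Q_k^n
```

Here V_0(s_1, s_2) is the energy along the reaction surface itself, and the two sums carry the harmonic and diagonal anharmonic dependence of the remaining 3N-8 modes on the two large amplitude coordinates.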
Abstract:
The implications of whether new surfaces in cutting are formed just by plastic flow past the tool or by some fracture-like separation process involving significant surface work are discussed. Oblique metal cutting is investigated using the ideas contained in a new algebraic model for the orthogonal machining of metals (Atkins, A. G., 2003, "Modeling Metalcutting Using Modern Ductile Fracture Mechanics: Quantitative Explanations for Some Longstanding Problems," Int. J. Mech. Sci., 45, pp. 373–396) in which significant surface work (ductile fracture toughness) is incorporated. The model is able to predict explicit material-dependent primary shear plane angles and provides explanations for a variety of well-known effects in cutting, such as the reduction of the shear plane angle at small uncut chip thicknesses; the quasilinear plots of cutting force versus depth of cut; the existence of a positive force intercept in such plots; why, in the size-effect regime of machining, anomalously high values of yield stress are determined; and why finite element method simulations of cutting have to employ a "separation criterion" at the tool tip. Predictions from the new analysis for oblique cutting (including an investigation of Stabler's rule for the relation between the chip flow velocity angle η_C and the angle of blade inclination i) compare consistently and favorably with experimental results.
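A minimal numerical sketch of the qualitative behaviour described above: when surface work is included, the cutting force is roughly linear in the uncut chip thickness with a positive intercept contributed by the toughness term. The functional form and all parameter values below are illustrative assumptions, not Atkins' actual model.

```python
# Illustrative only: a simplified force balance with a toughness intercept,
# F_c ~ (shear work per unit volume) * t0 * w + R * w, where R is the specific
# work of surface separation (fracture toughness) and w the width of cut.
tau_y = 400e6    # assumed shear flow stress, Pa
gamma = 2.5      # assumed shear strain in the primary shear zone
R     = 20e3     # assumed fracture toughness, J/m^2
w     = 2e-3     # width of cut, m

for t0_um in (5, 10, 20, 50, 100):          # uncut chip thickness, micrometres
    t0 = t0_um * 1e-6
    F_c = tau_y * gamma * t0 * w + R * w    # the R*w term gives the intercept
    print(f"t0 = {t0_um:4d} um  ->  F_c = {F_c:8.2f} N")
```

At the smallest thicknesses the R*w intercept dominates the plastic term, which is exactly the regime where ignoring surface work forces anomalously high apparent yield stresses.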
Abstract:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
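A minimal sketch of the transreal conventions summarised above, written as a total division over Python floats. It borrows IEEE NaN as a stand-in for nullity purely for illustration; that is an implementation convenience here, not how the author proposes transreal arithmetic be realised.

```python
import math

NULLITY = float("nan")  # stand-in for nullity (0/0); an illustrative choice only

def transreal_div(a: float, b: float) -> float:
    """Total division: defined for every pair of arguments, per transreal rules."""
    if math.isnan(a) or math.isnan(b):
        return NULLITY                        # nullity propagates
    if b == 0.0:
        if a == 0.0:
            return NULLITY                    # 0/0 is the number nullity
        return math.copysign(math.inf, a)     # a/0 is +inf or -inf by the sign of a
    return a / b

print(transreal_div(1.0, 0.0))    # inf
print(transreal_div(-1.0, 0.0))   # -inf
print(transreal_div(0.0, 0.0))    # nan (standing in for nullity)
```

Because every case returns a value, no exception is ever raised: division by zero yields a signed infinity or nullity rather than halting the computation.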
Abstract:
Transreal arithmetic is a total arithmetic that contains real arithmetic, but which has no arithmetical exceptions. It allows the specification of the Universal Perspex Machine which unifies geometry with the Turing Machine. Here we axiomatise the algebraic structure of transreal arithmetic so that it provides a total arithmetic on any appropriate set of numbers. This opens up the possibility of specifying a version of floating-point arithmetic that does not have any arithmetical exceptions and in which every number is a first-class citizen. We find that literal numbers in the axioms are distinct. In other words, the axiomatisation does not require special axioms to force non-triviality. It follows that transreal arithmetic must be defined on a set of numbers that contains {−∞, −1, 0, 1, ∞, Φ} as a proper subset. We note that the axioms have been shown to be consistent by machine proof.
Abstract:
Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with direct methods can take a very long time, as their computational cost depends on the size of the matrix. The computational complexity of stochastic Monte Carlo methods depends only on the number of chains and the length of those chains. The computing power needed by inherently parallel Monte Carlo methods can be supplied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors. (C) 2007 Elsevier B.V. All rights reserved.
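A toy serial sketch of the general technique (not the authors' load-balanced Grid implementation): the inverse of A is estimated from the Neumann series of C = I − A, sampled by random walks whose transition probabilities are proportional to |C|. The matrix, chain count, and walk length below are illustrative.

```python
import numpy as np

def mc_inverse(A, n_chains=5000, max_len=30, rng=np.random.default_rng(0)):
    """Estimate A^(-1) via the Neumann series for C = I - A (requires ||C|| < 1).
    Each chain is a random walk over column indices, carrying an importance weight."""
    n = A.shape[0]
    C = np.eye(n) - A
    P = np.abs(C) / np.abs(C).sum(axis=1, keepdims=True)   # transition probabilities
    inv = np.zeros((n, n))
    for i in range(n):                                     # one row of the inverse per start state
        for _ in range(n_chains):
            state, weight = i, 1.0
            inv[i, state] += weight                        # k = 0 term (identity)
            for _ in range(max_len):
                nxt = rng.choice(n, p=P[state])
                weight *= C[state, nxt] / P[state, nxt]    # importance weight
                state = nxt
                inv[i, state] += weight                    # k-th power term
    return inv / n_chains

A = np.array([[1.0, 0.2], [0.1, 1.0]])   # diagonally dominant toy example
print(mc_inverse(A))
print(np.linalg.inv(A))                  # deterministic reference
```

The key complexity property from the abstract is visible in the code: the work is governed by `n_chains` and `max_len`, and the chains are independent, so they can be farmed out across Grid nodes.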
Abstract:
In this paper we introduce a new algorithm, based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms for matrix inversion and solving systems of linear algebraic equations. The algorithm consists of two parts: approximate inversion by Monte Carlo and iterative refinement using a deterministic method. Here we present a parallel hybrid Monte Carlo algorithm which uses Monte Carlo to generate an approximate inverse and then improves the accuracy of that inverse by iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When solving a system of linear algebraic equations, Bx = b, the inverse matrix is used to compute the solution vector x = B^(-1)b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
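The refinement stage can be sketched with the Newton-Schulz iteration, one standard deterministic way to polish an approximate inverse; the abstract does not say which refinement scheme the authors use, so this choice is an assumption for illustration.

```python
import numpy as np

def refine_inverse(B, X, iters=10):
    """Newton-Schulz refinement of an approximate inverse X of B:
    X <- X(2I - BX). Converges quadratically when ||I - BX|| < 1."""
    I = np.eye(B.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - B @ X)
    return X

B = np.array([[4.0, 1.0], [2.0, 3.0]])
X0 = np.eye(2) / np.trace(B)            # crude initial guess standing in for the
                                        # Monte Carlo approximate inverse
X = refine_inverse(B, X0)
b = np.array([1.0, 2.0])
print(X @ b, np.linalg.solve(B, b))     # x = B^(-1) b, against a reference
```

The division of labour matches the abstract: a cheap stochastic approximation supplies the starting point, and a few deterministic iterations recover full accuracy.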
Abstract:
In this work we study the computational complexity of a class of grid Monte Carlo algorithms for integral equations. The idea of the algorithms is to approximate the integral equation by a system of algebraic equations, and then to solve the system by Markov chain iterative Monte Carlo. The assumption here is that the corresponding Neumann series for the iteration matrix does not necessarily converge, or converges slowly. We use a special technique to accelerate the convergence. An estimate of the computational complexity of the Monte Carlo algorithm using the considered approach is obtained, and is compared with the corresponding quantity for the grid-free Monte Carlo algorithm. The conditions under which the class of grid Monte Carlo algorithms is more efficient are given.
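A sketch of the first step only, under simplifying assumptions: discretising a Fredholm integral equation of the second kind on a quadrature grid produces exactly the kind of algebraic system the abstract refers to. The kernel, right-hand side, and grid size are illustrative; a Markov chain Monte Carlo solver (or the accelerated variant the authors analyse) would then be applied to this system in place of the direct solve used here as a reference.

```python
import numpy as np

# Fredholm equation of the second kind: u(x) = f(x) + lam * integral_0^1 K(x,y) u(y) dy
K = lambda x, y: np.exp(-np.abs(x - y))   # illustrative kernel
f = lambda x: np.sin(np.pi * x)           # illustrative right-hand side
lam, n = 0.5, 64

x = (np.arange(n) + 0.5) / n              # midpoint quadrature nodes on [0, 1]
w = 1.0 / n                               # equal quadrature weights
L = lam * w * K(x[:, None], x[None, :])   # iteration matrix of the algebraic system
u = np.linalg.solve(np.eye(n) - L, f(x))  # reference solution; MC would replace this
print(u[:4])
```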
Abstract:
We consider scattering of a time harmonic incident plane wave by a convex polygon with piecewise constant impedance boundary conditions. Standard finite or boundary element methods require the number of degrees of freedom to grow at least linearly with respect to the frequency of the incident wave in order to maintain accuracy. Extending earlier work by Chandler-Wilde and Langdon for the sound soft problem, we propose a novel Galerkin boundary element method, with the approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh with smaller elements closer to the corners of the polygon. Theoretical analysis and numerical results suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency of the incident wave.
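A sketch of the mesh grading idea: on a side of length L with a corner at 0, a standard graded mesh places nodes at L(j/N)^q so that elements shrink algebraically toward the corner, where the solution is least smooth. The grading exponent and element count here are illustrative, not values from the authors' analysis.

```python
import numpy as np

def graded_mesh(L=1.0, N=8, q=3.0):
    """Nodes of a mesh on [0, L] graded toward the corner at 0: x_j = L*(j/N)**q."""
    j = np.arange(N + 1)
    return L * (j / N) ** q

print(graded_mesh())   # smallest elements cluster nearest the corner
```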
Abstract:
We consider the scattering of a time-harmonic acoustic incident plane wave by a sound soft convex curvilinear polygon with Lipschitz boundary. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the number of degrees of freedom required to achieve a prescribed level of accuracy grows at least linearly with respect to the frequency of the incident wave. Here we propose a novel Galerkin boundary element method with a hybrid approximation space, consisting of the products of plane wave basis functions with piecewise polynomials supported on several overlapping meshes; a uniform mesh on illuminated sides, and graded meshes refined towards the corners of the polygon on illuminated and shadow sides. Numerical experiments suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy need only grow logarithmically as the frequency of the incident wave increases.
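A sketch of one hybrid basis function of the type described: a plane wave of wavenumber k multiplied by a piecewise polynomial (here a piecewise-linear hat) supported on part of the boundary parametrisation. Function names and parameters are illustrative.

```python
import numpy as np

def hybrid_basis(s, k=50.0, a=0.4, b=0.6):
    """Plane wave exp(i*k*s) times a piecewise-linear hat supported on [a, b]."""
    mid = 0.5 * (a + b)
    hat = np.clip(1.0 - np.abs(s - mid) / (mid - a), 0.0, None)
    return np.exp(1j * k * s) * hat

s = np.linspace(0.0, 1.0, 11)
print(hybrid_basis(s).round(3))   # zero outside [0.4, 0.6], oscillatory inside
```

The plane wave factor carries the known high-frequency oscillation, so the piecewise polynomial only has to resolve the slowly varying envelope; this is why the degrees of freedom can grow logarithmically rather than linearly with frequency.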
Abstract:
A simple parameter-adaptive controller design methodology is introduced in which steady-state servo tracking properties provide the major control objective. This is achieved without cancellation of process zeros, and hence the underlying design can be applied to non-minimum phase systems. As with other self-tuning algorithms, the design (user-specified) polynomials of the proposed algorithm define the performance capabilities of the resulting controller. However, with the appropriate definition of these polynomials, the synthesis technique can be shown to admit different adaptive control strategies, e.g. self-tuning PID and self-tuning pole-placement controllers. The algorithm can therefore be thought of as an embodiment of other self-tuning design techniques. The performance of some of the resulting controllers is illustrated using simulation examples and an on-line application to an experimental apparatus.
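A sketch of the parameter-adaptation core common to self-tuning schemes of this kind: recursive least squares estimation of the process model, from which the design polynomials would then synthesise the control law. This is a generic RLS update, not the specific algorithm proposed in the paper.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step: theta holds the model parameters,
    phi the regressor, y the new output sample, lam the forgetting factor."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by the prediction error
    P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta, P

theta, P = np.zeros(2), 1e3 * np.eye(2)
true = np.array([0.8, 0.5])                  # y_t = 0.8*y_{t-1} + 0.5*u_{t-1}
y_prev, rng = 0.0, np.random.default_rng(1)
for _ in range(200):
    u = rng.standard_normal()
    phi = np.array([y_prev, u])
    y = true @ phi + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
    y_prev = y
print(theta)   # converges close to [0.8, 0.5]
```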
Abstract:
A polynomial-based ARMA model, when posed in a state-space framework, can be regarded in many different ways. In this paper two particular state-space forms of the ARMA model are considered; although both are canonical in structure, they differ in the way disturbances are fed into the state and output equations. For both forms a solution is found to the optimal discrete-time observer problem, and algebraic connections between the two optimal observers are shown. The purpose of the paper is to highlight the fact that the optimal observer obtained from the first state-space form, commonly known as the innovations form, is not the one employed in an optimal controller in the minimum-output-variance sense, whereas the optimal observer obtained from the second form is. Hence the second form is a much more appropriate state-space description to use for controller design, particularly when employed in self-tuning control schemes.
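For concreteness, a sketch of the first of the two descriptions mentioned, the innovations form, for a second-order ARMA model; the matrix layout is the standard observer-canonical one and the coefficient values are illustrative.

```python
import numpy as np

# ARMA(2,2): y_t + a1*y_{t-1} + a2*y_{t-2} = e_t + c1*e_{t-1} + c2*e_{t-2}
a1, a2 = -1.5, 0.7
c1, c2 = 0.5, 0.2

# Innovations form: x_{t+1} = A x_t + K e_t,  y_t = C x_t + e_t
A = np.array([[-a1, 1.0],
              [-a2, 0.0]])
K = np.array([c1 - a1,
              c2 - a2])
C = np.array([1.0, 0.0])

# Simulate a few steps to check the recursion runs
x, rng = np.zeros(2), np.random.default_rng(0)
for _ in range(5):
    e = rng.standard_normal()
    y = C @ x + e
    x = A @ x + K * e
    print(round(y, 3))
```

In this form the same white sequence e_t drives both the state and the output, which is exactly the structural feature that distinguishes it from the second form discussed in the paper.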
Abstract:
The problem of identifying a nonlinear dynamic system is considered. A two-layer neural network is used for the solution of the problem. Systems disturbed by unmeasurable noise are considered, although the disturbance is known to be a random piecewise polynomial process. Absorption polynomials and nonquadratic loss functions are used to reduce the effect of this disturbance on the estimates of the optimal memory of the neural-network model.
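The abstract does not specify which nonquadratic loss the authors use; a common choice that damps the influence of large, structured disturbance samples on the residuals is the Huber function, sketched here purely as an illustration of the idea.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones, so
    outlying disturbance samples pull less on the parameter estimates."""
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

print(huber(np.array([0.1, 0.5, 3.0, -8.0])))
```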
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification, using fuzzy membership functions usually based on B-splines, together with algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples to demonstrate the effectiveness of this new data-based modelling approach.
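A sketch of the geometric step the construction rests on: mapping an input point to barycentric co-ordinates with respect to a triangle of knots, then evaluating a Bernstein form (degree one, for brevity) over those co-ordinates. Vertices and control values are illustrative.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric co-ordinates of point p w.r.t. triangle (a, b, c); they are
    non-negative inside the triangle and always sum to one."""
    T = np.column_stack((b - a, c - a))
    l2, l3 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l2 - l3, l2, l3])

a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
lam = barycentric(np.array([0.25, 0.25]), a, b, c)
weights = np.array([0.2, 0.7, 0.4])    # illustrative control values at the vertices
print(lam, lam @ weights)              # degree-1 Bernstein (linear) evaluation
```

The non-negativity and sum-to-one properties of the barycentric co-ordinates are what carry over the fuzzy-membership interpretation mentioned in the abstract.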
Abstract:
This paper introduces perspex algebra, which is being developed as a common representation of geometrical knowledge. A perspex can currently be interpreted in one of four ways. First, the algebraic perspex is a generalization of matrices; it provides the most general representation for all of the interpretations of a perspex. The algebraic perspex can be used to describe arbitrary sets of coordinates. The remaining three interpretations of the perspex are all related to square matrices and operate in a Euclidean model of projective space-time, called perspex space. Perspex space differs from the usual Euclidean model of projective space in that it contains the point at nullity. It is argued that the point at nullity is necessary for a consistent account of perspective in top-down vision. Second, the geometric perspex is a simplex in perspex space. It can be used as a primitive building block for shapes, or as a way of recording landmarks on shapes. Third, the transformational perspex describes linear transformations in perspex space that provide the affine and perspective transformations in space-time. It can be used to match a prototype shape to an image, even in so-called 'accidental' views where the depth of an object disappears from view, or an object stays in the same place across time. Fourth, the parametric perspex describes the geometric and transformational perspexes in terms of parameters that are related to everyday English descriptions. The parametric perspex can be used to obtain both continuous and categorical perception of objects. The paper ends with a discussion of issues related to using a perspex to describe logic.
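As a generic illustration of the transformational interpretation (not the perspex data structure itself): in homogeneous coordinates a single 4x4 matrix applies an affine or perspective transform to a point, and division by the final component projects back. The matrix and point below are illustrative.

```python
import numpy as np

# A perspective transform in homogeneous coordinates: the last row makes the
# output weight depend on depth z, which is what foreshortens distant points.
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.5, 1.0]])

p = np.array([2.0, 1.0, 4.0, 1.0])    # point (2, 1, 4) with homogeneous weight 1
q = M @ p
print(q[:3] / q[3])                    # project: divide by the weight component
```

When q[3] is zero the projection is undefined in ordinary arithmetic; handling such 'accidental' views is precisely the stated motivation for including the point at nullity in perspex space.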
Abstract:
Smooth trajectories are essential for safe interaction between a human and a haptic interface. Different methods and strategies have been introduced to create such smooth trajectories. This paper studies the creation of human-like movements in haptic interfaces, based on the study of human arm motion. These motions are intended to retrain the upper-limb movements of patients who have lost manipulation function following a stroke. We present a model that uses higher-degree polynomials to define a trajectory and control the robot arm to achieve minimum-jerk movements. It also studies different methods that can be derived from polynomials to create more realistic human-like movements for therapeutic purposes.
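The classical minimum-jerk point-to-point trajectory is a fifth-degree polynomial; a sketch of the standard closed form, with illustrative endpoints, is shown below. The abstract does not state that this exact profile is the paper's model, only that higher-degree polynomials are used.

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk position profile: x(t) = x0 + (xf - x0)*(10 s^3 - 15 s^4 + 6 s^5),
    with s = t/T. Velocity and acceleration vanish at both endpoints."""
    s = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t = np.linspace(0.0, 1.0, 6)
print(min_jerk(0.0, 0.3, 1.0, t).round(4))   # smooth reach of 0.3 m in 1 s
```

Because the profile's velocity and acceleration are zero at both ends, the resulting robot motion starts and stops gently, which is the property sought for safe therapeutic interaction.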