962 results for Algebraic ANRs
Abstract:
This note is concerned with the problem of determining approximate solutions of Fredholm integral equations of the second kind. By approximating the solution of a given integral equation with a polynomial, an over-determined system of linear algebraic equations in the unknown coefficients is obtained, which is then solved using the least-squares method. Several examples are examined in detail. (c) 2009 Elsevier Inc. All rights reserved.
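As a minimal sketch of the approach described above (the kernel, right-hand side and polynomial degree below are invented for illustration and are not the note's examples), a separable-kernel equation with a known exact solution can be solved by polynomial collocation plus least squares:

```python
import numpy as np

# Fredholm equation of the second kind:  u(x) = f(x) + ∫_0^1 K(x,t) u(t) dt.
# Kernel K(x,t) = x*t and f(x) = 2x/3 are invented for the demo; the exact
# solution is then u(x) = x.  Ansatz: u(x) ≈ Σ_k c_k x^k.  For this kernel
# ∫_0^1 x t · t^k dt = x / (k + 2), so each collocation point x_i gives one
# linear equation in the coefficients c_k; using more points than unknowns
# yields the over-determined system solved by least squares.
deg = 3
xs = np.linspace(0.0, 1.0, 20)
A = np.stack([xs**k - xs / (k + 2) for k in range(deg + 1)], axis=1)
b = 2.0 * xs / 3.0
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u = lambda x: sum(ck * x**k for k, ck in enumerate(c))
print(u(0.5))   # ≈ 0.5, matching the exact solution u(x) = x
```

Because the exact solution lies inside the polynomial ansatz space, the over-determined system is consistent and least squares recovers it up to rounding.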
Abstract:
Recovering the motion of a non-rigid body from a set of monocular images permits the analysis of dynamic scenes in uncontrolled environments. However, the extension of factorisation algorithms for rigid structure from motion to the low-rank non-rigid case has proved challenging. This stems from the comparatively hard problem of finding a linear “corrective transform” which recovers the projection and structure matrices from an ambiguous factorisation. We show that this greater difficulty is due to the need to find multiple solutions to a non-trivial problem, and cast a number of previous approaches as alleviating this issue either by a) introducing constraints on the basis, making the problems non-identical, or b) incorporating heuristics to encourage a diverse set of solutions, making the problems inter-dependent. While it has previously been recognised that finding a single solution to this problem is sufficient to estimate cameras, we show that it is possible to bootstrap this partial solution to find the complete transform in closed form. However, we acknowledge that our method minimises an algebraic error and is thus inherently sensitive to deviation from the low-rank model. We compare our closed-form solution for non-rigid structure with known cameras to the closed-form solution of Dai et al. [1], which we find produces only coplanar reconstructions. We therefore recommend that 3D reconstruction error always be measured relative to a trivial reconstruction, such as a planar one.
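The factorisation ambiguity at the heart of this problem can be seen on synthetic data (random matrices, not the paper's data or method): a truncated SVD gives one valid factorisation of the measurement matrix, and any invertible G yields another that fits equally well, which is why a corrective transform must be sought.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 6                                   # rank 3K, e.g. K = 2 non-rigid basis shapes
W = rng.normal(size=(20, r)) @ rng.normal(size=(r, 30))  # synthetic measurement matrix

# Truncated SVD gives *a* rank-r factorisation W = M_hat @ S_hat ...
U, s, Vt = np.linalg.svd(W)
M_hat = U[:, :r] * np.sqrt(s[:r])
S_hat = np.sqrt(s[:r])[:, None] * Vt[:r]

# ... but only up to an invertible "corrective transform" G:
# (M_hat G)(G^{-1} S_hat) reproduces W equally well.
G = rng.normal(size=(r, r))
M2, S2 = M_hat @ G, np.linalg.inv(G) @ S_hat
print(np.linalg.norm(W - M_hat @ S_hat), np.linalg.norm(W - M2 @ S2))  # both ≈ 0
```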
Abstract:
In an effort to develop a fully computerized approach for structural synthesis of kinematic chains, the steps involved in the method of structural synthesis based on transformation of binary chains [38] have been recast in a format suitable for implementation on a digital computer. The methodology thus evolved has been combined with the algebraic procedures for structural analysis [44] to develop a unified computer program for structural synthesis and analysis of simple-jointed kinematic chains with a degree of freedom 0. Applications of this program are presented in the succeeding parts of the paper.
Abstract:
Numerically discretized dynamic optimization problems having active inequality and equality path constraints that, along with the dynamics, induce locally high-index differential-algebraic equations often cause the optimizer to fail to converge or to produce degraded control solutions. In many applications, regularization of the numerically discretized problem in direct transcription schemes by perturbing the high-index path constraints helps the optimizer to converge to useful control solutions. For complex engineering problems with many constraints it is often difficult to find effective nondegenerate perturbations that produce useful solutions in some neighborhood of the correct solution. In this paper we describe a numerical discretization that regularizes the numerically consistent discretized dynamics and does not perturb the path constraints. For all values of the regularization parameter the discretization remains numerically consistent with the dynamics and the path constraints specified in the original problem. The regularization is quantifiable in terms of the time step size in the mesh and the regularization parameter. For fully regularized systems the scheme converges linearly in time step size. The method is illustrated with examples.
Abstract:
From Arithmetic to Algebra: changes in skills in comprehensive school over 20 years. In recent decades the understanding of calculation has been emphasized in mathematics teaching. Many studies have found that better understanding helps to apply skills in new conditions and that the ability to think on an abstract level increases the transfer to new contexts. In my research I treat competence as a matrix in which content runs along the horizontal axis and levels of thinking along the vertical axis. Know-how here means intellectual and strategic flexibility and understanding. The resources and limitations of memory affect learning in different ways in different phases; therefore both flexible conceptual thinking and automatization must be considered in learning. The research questions I examine are what kind of changes have occurred in mathematical skills in comprehensive school over the last 20 years and what kind of conceptual thinking is demonstrated by students in this decade. The study consists of two parts. The first part is a statistical analysis of mathematical skills and their changes over the last 20 years in comprehensive school. In the test the pupils did not use calculators. The second part is a qualitative analysis of the conceptual thinking of pupils in comprehensive school in this decade. The study shows significant differences in algebra and in some parts of arithmetic. The largest differences were detected in calculation skills with fractions. In the 1980s two out of three pupils were able to complete tasks with fractions, but in the 2000s only one out of three was able to do the same tasks. Also remarkable is that of the students who could complete the tasks with fractions, only one in three was on the conceptual level in his or her thinking. This means that about 10% of pupils are able to understand an algebraic expression that has the same isomorphic structure as the arithmetical expression.
This finding is important because the ability to think innovatively is created when learning the basic concepts. Keywords: arithmetic, algebra, competence
Abstract:
Using the link-link incidence matrix to represent a simple-jointed kinematic chain, algebraic procedures have been developed to determine its structural characteristics, such as the type of freedom of the chain and the number of distinct mechanisms and driving mechanisms that can be derived from the chain. A computer program incorporating these graph-theory-based procedures has been applied successfully to the structural analysis of several typical chains.
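As a small illustration of this kind of incidence-matrix bookkeeping (this is Grübler's planar mobility criterion, not the paper's procedures for counting distinct mechanisms), the degree of freedom of a four-bar chain can be read off its link-link adjacency matrix:

```python
import numpy as np

# Link-link adjacency matrix of a four-bar kinematic chain: link i and
# link j are connected by a joint iff adj[i, j] == 1 (illustrative only).
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])

n = adj.shape[0]              # number of links
j = int(adj.sum()) // 2       # number of simple (binary, single-DOF) joints

# Grübler's criterion for planar chains with single-degree-of-freedom joints:
F = 3 * (n - 1) - 2 * j
print(F)   # 1 -> the four-bar chain is a single-degree-of-freedom mechanism
```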
Abstract:
The “partition method” or “sub-domain method” consists of expressing the solution of a governing differential equation, partial or ordinary, in terms of functions which satisfy the boundary conditions and setting to zero the error in the differential equation integrated over each of the sub-domains into which the given domain is partitioned. In this paper, the use of this method in eigenvalue problems with particular reference to vibration of plates is investigated. The deflection of the plate is expressed in terms of polynomials satisfying the boundary conditions completely. Setting the integrated error in each of the subdomains to zero results in a set of simultaneous, linear, homogeneous, algebraic equations in the undetermined coefficients of the deflection series. The algebraic eigenvalue problem is then solved for eigenvalues and eigenvectors. Convergence is examined in a few typical cases and is found to be satisfactory. The results obtained are compared with existing results based on other methods and are found to be in very good agreement.
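A one-dimensional analogue may make the procedure concrete (the equation, basis and partition below are chosen for illustration and are simpler than the plate problems of the paper):

```python
import numpy as np

# 1D analogue of the sub-domain method:  -u'' = λ u on (0,1) with
# u(0) = u(1) = 0, whose smallest eigenvalue is π² ≈ 9.8696.
# Trial functions φ_k(x) = x^k (1 - x), k = 1..N, satisfy the boundary
# conditions completely; setting the residual -φ_k'' - λ φ_k integrated over
# each of N equal sub-domains to zero gives the algebraic eigenproblem
# A c = λ B c in the undetermined coefficients c.
N = 3
edges = np.linspace(0.0, 1.0, N + 1)

def dphi(k, x):   # φ_k'(x) for φ_k = x^k - x^(k+1)
    return k * x**(k - 1) - (k + 1) * x**k

def Phi(k, x):    # antiderivative of φ_k
    return x**(k + 1) / (k + 1) - x**(k + 2) / (k + 2)

A = np.zeros((N, N))
B = np.zeros((N, N))
for i in range(N):
    a, b = edges[i], edges[i + 1]
    for k in range(1, N + 1):
        A[i, k - 1] = -(dphi(k, b) - dphi(k, a))   # ∫ -φ_k'' over sub-domain i
        B[i, k - 1] = Phi(k, b) - Phi(k, a)        # ∫  φ_k   over sub-domain i

# B is nonsingular here, so solve as a standard eigenproblem of B^{-1} A
vals = np.linalg.eigvals(np.linalg.solve(B, A))
lam = min(v.real for v in vals if abs(v.imag) < 1e-6)
print(lam)   # close to π² = 9.8696
```

Even with three polynomial trial functions the smallest eigenvalue comes out within about one percent of the exact value, mirroring the satisfactory convergence reported above.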
Abstract:
This paper discusses the consistent regularization property of the generalized α method when applied as an integrator to an initial value high-index and singular differential-algebraic equation model of a multibody system. The regularization comes from within the discretization itself, and the discretization remains consistent over the range of values the regularization parameter may take. The regularization involves increasing the smallest singular values of the ill-conditioned Jacobian of the discretization and is different from Baumgarte and similar techniques, which tend to be inconsistent for poor choices of regularization parameter. This regularization also helps where pre-conditioning the Jacobian by scaling is of limited effect, for example, when the scleronomic constraints contain multiple closed loops or singular configurations or when high-index path constraints are present. The feed-forward control in Kane's equation models is additionally considered in the numerical examples to illustrate the effect of regularization. The discretization presented in this work is adapted to the first-order DAE system (unlike the original method, which is intended for second-order systems) for its A-stability and the same order of accuracy for positions and velocities.
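For contrast, Baumgarte stabilization, the technique the paper's regularization is distinguished from, can be sketched on a planar pendulum DAE; the gains α and β below are exactly the kind of ad hoc parameter choices the abstract alludes to:

```python
import numpy as np

# Baumgarte stabilization of a planar pendulum: constraint g = x² + y² - L²,
# multiplier eliminated at acceleration level with added damping terms
#   g̈ + 2·α·ġ + β²·g = 0   (α = β = 10 chosen ad hoc for this sketch).
L, m, grav, alpha, beta = 1.0, 1.0, 9.81, 10.0, 10.0

def rhs(s):
    x, y, vx, vy = s
    g  = x * x + y * y - L * L
    gd = 2.0 * (x * vx + y * vy)
    # solve g̈ = -2αġ - β²g for the Lagrange multiplier lam
    lam = m * (2.0 * (vx * vx + vy * vy) - 2.0 * grav * y
               + 2.0 * alpha * gd + beta * beta * g) / (4.0 * (x * x + y * y))
    return np.array([vx, vy, -2.0 * lam * x / m, -grav - 2.0 * lam * y / m])

s = np.array([L, 0.0, 0.0, 0.0])       # start horizontal, at rest
dt = 1e-3
for _ in range(5000):                  # classical RK4, 5 s of simulated time
    k1 = rhs(s); k2 = rhs(s + dt / 2 * k1)
    k3 = rhs(s + dt / 2 * k2); k4 = rhs(s + dt * k3)
    s += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

drift = abs(s[0]**2 + s[1]**2 - L**2)
print(drift)   # constraint drift stays small thanks to the stabilization terms
```

A poor choice of α and β here degrades or destabilizes the constraint enforcement, which is the inconsistency risk the abstract contrasts its built-in regularization against.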
Abstract:
This paper describes the application of vector spaces over Galois fields for obtaining a formal description of a picture in the form of a very compact, non-redundant, unique syntactic code. Two different methods of encoding are described. Both methods consist of identifying the given picture with a matrix (called the picture matrix) over a finite field. In the first method, the eigenvalues and eigenvectors of this matrix are obtained. The eigenvector expansion theorem is then used to reconstruct the original matrix. If several of the eigenvalues happen to be zero, this scheme results in a considerable compression. In the second method, the picture matrix is reduced to a primitive diagonal form (Hermite canonical form) by elementary row and column transformations. These sequences of elementary transformations constitute a unique and unambiguous syntactic code, called the Hermite code, for reconstructing the picture from the primitive diagonal matrix. A good compression of the picture results if the rank of the matrix is considerably lower than its order. An important aspect of this code is that it preserves the neighbourhood relations in the picture, and the primitive remains invariant under translation, rotation, reflection, enlargement and replication. It is also possible to derive the codes for these transformed pictures from the Hermite code of the original picture by simple algebraic manipulation. This code will find extensive applications in picture compression, storage, retrieval, transmission and in designing pattern recognition and artificial intelligence systems.
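A hedged sketch of the second method over GF(2) (the function names and the sample picture are invented, not the paper's notation): elementary row/column operations reduce the picture matrix to a primitive diagonal form, and replaying the recorded operations in reverse reconstructs the picture.

```python
import numpy as np

# Reduce a binary "picture matrix" over GF(2) to the primitive diagonal form
# [[I_r, 0], [0, 0]] by recorded elementary row/column operations; the record
# plays the role of the syntactic code, and rank < order means compression.
def to_diagonal(M):
    D, ops = M.copy() % 2, []
    rows, cols = D.shape
    r = 0
    while r < min(rows, cols):
        pivot = next(((i, j) for i in range(r, rows)
                      for j in range(r, cols) if D[i, j]), None)
        if pivot is None:
            break
        pr, pc = pivot
        if pr != r:
            D[[r, pr]] = D[[pr, r]]; ops.append(('rswap', r, pr))
        if pc != r:
            D[:, [r, pc]] = D[:, [pc, r]]; ops.append(('cswap', r, pc))
        for i in range(rows):           # clear column r with row additions
            if i != r and D[i, r]:
                D[i] = (D[i] + D[r]) % 2; ops.append(('radd', i, r))
        for j in range(cols):           # clear row r with column additions
            if j != r and D[r, j]:
                D[:, j] = (D[:, j] + D[:, r]) % 2; ops.append(('cadd', j, r))
        r += 1
    return D, ops

def reconstruct(D, ops):
    M = D.copy()
    for op, i, j in reversed(ops):      # every GF(2) elementary op is its own inverse
        if op == 'rswap':   M[[i, j]] = M[[j, i]]
        elif op == 'cswap': M[:, [i, j]] = M[:, [j, i]]
        elif op == 'radd':  M[i] = (M[i] + M[j]) % 2
        else:               M[:, i] = (M[:, i] + M[:, j]) % 2
    return M

pic = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]])
D, code = to_diagonal(pic)
print(int(D.trace()))                                  # 2: rank 2 < order 4
print(np.array_equal(reconstruct(D, code), pic))       # True
```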
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) could be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods to study random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue on the basis of statistical mechanics what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of multiple SLE may form different topological configurations, “pure geometries”. We conjecture a relation between the topological configurations and CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
Abstract:
Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite system limit which is taken by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and the second paper of this thesis study properties of SLEs. They contain two different methods to study the whole SLE curve, which is, in fact, the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find common martingales to different processes, and that way, to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on stationarity of the law of the whole SLE curve under a SLE induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits and those will be well-described by Loewner evolutions with random driving forces.
Abstract:
We explore here the acceleration of convergence of iterative methods for the solution of a class of quasilinear and linear algebraic equations. The specific systems are the finite difference form of the Navier-Stokes equations and the energy equation for recirculating flows. The acceleration procedures considered are: the successive over-relaxation scheme; several implicit methods; and a second-order procedure. A new implicit method, the alternating direction line iterative method, is proposed in this paper. The method combines the advantages of the line successive over-relaxation and alternating direction implicit methods. The various methods are tested for their computational economy and accuracy on a typical recirculating flow situation. The numerical experiments show that the alternating direction line iterative method is the most economical method of solving the Navier-Stokes equations for all Reynolds numbers in the laminar regime. The usual ADI method is shown to be less attractive for large Reynolds numbers because of the loss of diagonal dominance. This loss can, however, be restored by a suitable choice of the relaxation parameter, but at the cost of accuracy. The accuracy of the new procedure is comparable to that of the well-tested successive over-relaxation method and to the available results in the literature. The second-order procedure turns out to be the most efficient method for the solution of the linear energy equation.
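A minimal sketch of point successive over-relaxation, here on the 2D Laplace equation rather than the Navier-Stokes system studied in the paper (grid size, boundary values and relaxation parameter are invented for the demo):

```python
import numpy as np

# Successive over-relaxation (SOR) on the 2D Laplace equation with Dirichlet
# boundaries: u = 1 on the top edge, u = 0 elsewhere.  Each interior point is
# relaxed toward its Gauss-Seidel value, over-weighted by omega > 1.
n, omega = 32, 1.8
u = np.zeros((n, n))
u[0, :] = 1.0                        # top boundary held at 1

for sweep in range(500):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
            u[i, j] += omega * (gs - u[i, j])

# residual of the five-point Laplacian on the interior
res = u[1:-1, 1:-1] - 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
print(np.abs(res).max())             # ~0 after convergence
```

The choice of omega controls the convergence rate; the diagonal-dominance issue mentioned above for ADI at large Reynolds numbers has an analogue here, since over-relaxing too aggressively (omega approaching 2) also degrades convergence.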
Abstract:
The paper deals with a rational approach to the development of general design criteria for non-dissipative vibration isolation systems. The study covers straight-through spring-mass systems as well as branched ones with dynamic absorbers. Various design options, such as the addition of another spring-mass pair, replacement of an existing system by one with more spring-mass pairs for the same space and material requirements, provision of one or more dynamic absorbers for the desired frequency range, etc., are investigated quantitatively by means of an algebraic algorithm which enables one to write down straightaway the velocity ratio and hence the transmissibility of a linear dynamical system in terms of the constituent parameters.
Abstract:
A computational model for isothermal axisymmetric turbulent flow in a quarl burner is set up using the CFD package FLUENT, and numerical solutions obtained from the model are compared with available experimental data. A standard k-ε model and two versions of the RNG k-ε model are used to model the turbulence. One of the aims of the computational study is to investigate whether the RNG-based k-ε turbulence models are capable of yielding improved flow predictions compared with the standard k-ε turbulence model. A difficulty is that the flow considered here features a confined vortex breakdown which can be highly sensitive to flow behaviour both upstream and downstream of the breakdown zone. Nevertheless, the relatively simple confining geometry allows us to undertake a systematic study so that both grid-independent and domain-independent results can be reported. The systematic study includes a detailed investigation of the effects of upstream and downstream conditions on the predictions, in addition to grid refinement and other tests to ensure that numerical error is not significant. Another important aim is to determine to what extent the turbulence model predictions can provide us with new insights into the physics of confined vortex breakdown flows. To this end, the computations are discussed in detail with reference to known vortex breakdown phenomena and existing theories. A major conclusion is that one of the RNG k-ε models investigated here is able to correctly capture the complex forward flow region inside the recirculating breakdown zone. This apparently pathological result is in stark contrast to the findings of previous studies, most of which have concluded that either algebraic or differential Reynolds stress modelling is needed to correctly predict the observed flow features.
Arguments are given as to why an isotropic eddy-viscosity turbulence model may well be able to capture the complex flow structure within the recirculating zone for this flow setup. With regard to the flow physics, a major finding is that the results obtained here are more consistent with the view that confined vortex breakdown is a type of axisymmetric boundary layer separation, rather than a manifestation of a subcritical flow state.