968 results for Polynomial approximation
Abstract:
Intensity and volume of training in Artistic Gymnastics are increasing, as the age at which athletes take up the sport becomes ever earlier, creating some disturbances in them.
Abstract:
This Doctoral Thesis deals with the introduction of the Bernstein Partition of Unity into the Galerkin weak form to solve boundary value problems in the field of structural analysis. The family of Bernstein basis functions constitutes a spanning set of the space of polynomial functions that allows the construction of numerical approximations that do not require a mesh: the shape functions, which are globally supported, are determined only by the selected approximation order and by the parametrization or mapping of the domain, with the nodal positions implicitly defined. The exposition of the formulation is preceded by a literature review which begins with the Finite Element Method and covers the main techniques for solving Partial Differential Equations without a mesh, including the so-called Meshless Methods and the spectral methods. In this context, the Thesis subjects the Bernstein-Galerkin approximation to validation on classic one- and two-dimensional benchmarks of Structural Mechanics. Implementation aspects such as consistency, reproduction capability, the non-interpolating nature at boundaries, the h-p refinement strategy, and coupling with other numerical approximations are studied. An important part of the investigation focuses on the analysis and optimization of computational efficiency, mainly the reduction of the CPU cost associated with generating and handling full matrices. Finally, two reference cases of aeronautical structures are addressed: the stress analysis of an anisotropic angle part and the evaluation of Fracture Mechanics stress intensity factors by means of a model coupling the Bernstein Partition of Unity with a finite element mesh.
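The partition-of-unity property of the Bernstein basis mentioned above is easy to verify numerically. The sketch below is an illustration, not code from the Thesis: it evaluates the order-n Bernstein basis B(i,n)(x) = C(n,i) x^i (1-x)^(n-i) on [0, 1] and checks that the functions sum to one.

```python
import math

def bernstein_basis(n, x):
    """Evaluate the n+1 Bernstein basis polynomials of order n at x in [0, 1]."""
    return [math.comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

# Partition of unity: at any point of the domain the basis sums to 1,
# and each basis function is non-negative.
values = bernstein_basis(4, 0.3)
assert abs(sum(values) - 1.0) < 1e-12
assert all(v >= 0.0 for v in values)
```

Because the basis depends only on the chosen order n and the parametrization of the domain, no mesh is needed to define these globally supported shape functions, which is the property the Thesis exploits.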
Abstract:
A quasi-cylindrical approximation is used to analyse the axisymmetric swirling flow of a liquid with a hollow air core in the chamber of a pressure swirl atomizer. The liquid is injected into the chamber with an azimuthal velocity component through a number of slots at the periphery of one end of the chamber, and flows out as an annular sheet through a central orifice at the other end, following a conical convergence of the chamber wall. An effective inlet condition is used to model the effects of the slots and the boundary layer that develops at the nearby endwall of the chamber. An analysis is presented of the structure of the liquid sheet at the end of the exit orifice, where the flow becomes critical in the sense that upstream propagation of long-wave perturbations ceases to be possible. This analysis leads to a boundary condition at the end of the orifice that is an extension of the condition of maximum flux used with irrotational models of the flow. As is well known, the radial pressure gradient induced by the swirling flow in the bulk of the chamber causes the overpressure that drives the liquid towards the exit orifice, and also leads to Ekman pumping in the boundary layers of reduced azimuthal velocity at the convergent wall of the chamber and at the wall opposite the exit orifice. The numerical results confirm the important role played by the boundary layers. They make the thickness of the liquid sheet at the end of the orifice larger than predicted by irrotational models, and at the same time tend to decrease the overpressure required to pass a given flow rate through the chamber, because the large axial velocity in the boundary layers carries part of the flow rate. The thickness of the boundary layers increases when the atomizer constant (the inverse of a swirl number, proportional to the flow rate scaled with the radius of the exit orifice and the circulation around the air core) decreases.
A minimum value of this parameter is found below which the layer of reduced azimuthal velocity around the air core prevents the pressure from increasing and steadily driving the flow through the exit orifice. The effects of other parameters not accounted for by irrotational models are also analysed in terms of their influence on the boundary layers.
Abstract:
Social behavior is mainly based on swarm colonies, in which each individual shares its knowledge about the environment with the other individuals to reach optimal solutions. Such a cooperative model differs from competitive models in that individuals die and are born by combining the information of living ones. This paper presents a particle swarm optimization with differential evolution algorithm to train a neural network, in place of the classic backpropagation algorithm. The performance of a neural network on a particular problem depends critically on the choice of processing elements, the net architecture, and the learning algorithm. This work develops methods for the evolutionary design of artificial neural networks, focusing on optimizing the topology and connectivity structure of these networks.
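The abstract does not specify the hybrid PSO-differential-evolution trainer, so as a minimal illustration of the swarm half alone, the sketch below implements plain particle swarm optimization on a toy loss (a stand-in for a network's training error). All parameter values (inertia, acceleration coefficients, bounds) are assumptions.

```python
import random

def pso(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `loss` over R^dim with a basic particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for a network's training loss: a sphere function.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In an evolutionary trainer along the paper's lines, each particle would encode a candidate weight vector (or topology) and `loss` would be the network's error on the training set.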
Abstract:
The segmental approach has been used to analyze dark and light I-V curves. The photovoltaic (PV) dependences of the open-circuit voltage (Voc), the maximum power point voltage (Vm), and the efficiency (η) on the photogenerated current (Jg), or on the sunlight concentration ratio (X), are analyzed, as well as other photovoltaic characteristics of multijunction solar cells. The characteristics being analyzed are split into monoexponential (linear on the semilogarithmic scale) portions, each of which is characterized by a definite value of the ideality factor A and preexponential current J0. The monoexponentiality is advantageous, since at many steps of the analysis one can use analytical dependences instead of numerical methods. In this work, an experimental procedure for obtaining the necessary parameters is proposed, and an analysis of GaInP/GaInAs/Ge triple-junction solar cell characteristics is carried out. It is shown that, up to the sunlight concentration ratios at which the efficiency maximum is reached, the dark and light I-V curves calculated by the segmental method fit the experimental data well. An important consequence of this work is the feasibility of acquiring the resistance-less dark and light I-V curves, which can be used to obtain the I-V curves characterizing the losses in the transport part of a solar cell.
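Within a single monoexponential segment J = J0·(exp(V/(A·VT)) − 1), the analytical dependences mentioned above come out in closed form; for instance, setting J = Jg and solving for V gives Voc directly. A minimal sketch (the numerical values are illustrative, not the paper's data):

```python
import math

VT = 0.02585  # thermal voltage kT/q at ~300 K, in volts

def voc_segment(jg, j0, a):
    """Open-circuit voltage on a monoexponential segment J = J0*(exp(V/(A*VT)) - 1):
    setting J = Jg and solving for V gives Voc = A*VT*ln(Jg/J0 + 1)."""
    return a * VT * math.log(jg / j0 + 1.0)

# Within one segment, Voc grows logarithmically with the photogenerated current:
v1 = voc_segment(1e-2, 1e-12, 2.0)  # Jg = 10 mA/cm^2, assumed J0 and A
v2 = voc_segment(1e-1, 1e-12, 2.0)  # tenfold higher concentration
```

Each decade of Jg (or of the concentration ratio X) thus raises Voc by about A·VT·ln 10 on that segment, which is the kind of analytical shortcut the monoexponentiality provides.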
Abstract:
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
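The paper's procedure couples stochastic approximation with MCMC; as a minimal illustration of the stochastic-approximation ingredient alone, the classic Robbins-Monro recursion below finds the root of E[X − θ] = 0, i.e. the mean of X. The target distribution and step-size schedule are assumptions for the sketch, not the paper's setup.

```python
import random

def robbins_monro(sample, theta0=0.0, iters=20000):
    """Robbins-Monro recursion theta_{k+1} = theta_k + gamma_k * h(theta_k, X_k),
    here with h(theta, x) = x - theta, whose root in expectation is E[X]."""
    theta = theta0
    for k in range(1, iters + 1):
        gamma = 1.0 / k  # step sizes with sum(gamma_k) = inf, sum(gamma_k^2) < inf
        theta += gamma * (sample() - theta)
    return theta

random.seed(0)
est = robbins_monro(lambda: random.gauss(3.0, 1.0))  # noisy draws with mean 3
```

In the incomplete-data setting, the noisy draw at each step would come from an MCMC sample of the missing data given the current parameter value, rather than from a simple generator as here.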
Abstract:
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science.
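A classic instance of such a performance guarantee, not specific to this paper, is the maximal-matching algorithm for minimum vertex cover: taking both endpoints of every uncovered edge yields a cover at most twice the size of an optimal one. A sketch (the example graph is invented):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: take both endpoints of each uncovered edge.
    The chosen edges form a matching, and any cover must use at least one
    endpoint per matched edge, so the result is within a factor 2 of optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # every edge is covered
```

On this graph the optimum is the two-vertex cover {1, 3}, so the greedy answer of four vertices sits exactly at the guaranteed factor of 2 while running in linear time.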
Abstract:
We investigated how human subjects adapt to forces perturbing the motion of their arms. We found that this kind of learning is based on the capacity of the central nervous system (CNS) to predict and therefore to cancel externally applied perturbing forces. Our experimental results indicate: (i) that the ability of the CNS to compensate for the perturbing forces is restricted to those spatial locations where the perturbations have been experienced by the moving arm. The subjects are also able to compensate for forces experienced at neighboring workspace locations. However, adaptation decays smoothly and quickly with distance from the locations where disturbances had been sensed by the moving limb. (ii) Our experiments also show that the CNS builds an internal model of the external perturbing forces in intrinsic (muscle and/or joint) coordinates.
Abstract:
We compute the E-polynomial of the character variety of representations of a rank r free group in SL(3,C). Expanding upon techniques of Logares, Muñoz and Newstead (Rev. Mat. Complut. 26:2 (2013), 635-703), we stratify the space of representations and compute the E-polynomial of each geometrically described stratum using fibrations. Consequently, we also determine the E-polynomial of its smooth, singular, and abelian loci and the corresponding Euler characteristic in each case. Along the way, we give a new proof of results of Cavazos and Lawton (Int. J. Math. 25:6 (2014), 1450058).
Abstract:
Efficient hardware implementations of arithmetic operations in the Galois field are highly desirable for several applications, such as coding theory, computer algebra and cryptography. Among these operations, multiplication is of special interest because it is considered the most important building block. Therefore, high-speed algorithms and hardware architectures for computing multiplication are in high demand. In this paper, bit-parallel polynomial basis multipliers over the binary field GF(2^m) generated using type II irreducible pentanomials are considered. The multiplier presented here has the lowest time complexity known to date among similar multipliers based on this type of irreducible pentanomials.
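The paper's contribution is a bit-parallel hardware architecture, but the underlying polynomial-basis arithmetic can be sketched in software as carry-less multiplication followed by reduction modulo the irreducible polynomial. The sketch below is an illustration, not the paper's multiplier, and uses the NIST pentanomial x^163 + x^7 + x^6 + x^3 + 1 for GF(2^163) rather than a specifically type II pentanomial.

```python
def gf2m_mul(a, b, m, f):
    """Polynomial-basis multiplication in GF(2^m): elements are bitmasks of
    coefficients; multiply without carries, then reduce modulo f (which
    includes the x^m term)."""
    # Carry-less (XOR) schoolbook multiplication.
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # Reduce p modulo f, clearing one coefficient of degree >= m at a time.
    for d in range(p.bit_length() - 1, m - 1, -1):
        if p >> d & 1:
            p ^= f << (d - m)
    return p

# GF(2^163) with the irreducible pentanomial x^163 + x^7 + x^6 + x^3 + 1.
F = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1
```

A pentanomial modulus keeps the reduction step cheap: each cleared high-degree coefficient feeds back into only four lower positions, which is what the hardware architectures exploit.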
Abstract:
In the context of real-valued functions defined on metric spaces, it is known that the locally Lipschitz functions are uniformly dense in the continuous functions and that the Lipschitz in the small functions - the locally Lipschitz functions where both the local Lipschitz constant and the size of the neighborhood can be chosen independent of the point - are uniformly dense in the uniformly continuous functions. Between these two basic classes of continuous functions lies the class of Cauchy continuous functions, i.e., the functions that map Cauchy sequences in the domain to Cauchy sequences in the target space. Here, we exhibit an intermediate class of Cauchy continuous locally Lipschitz functions that is uniformly dense in the real-valued Cauchy continuous functions. In fact, our result is valid when our target space is an arbitrary Banach space.
Abstract:
We address the optimization of discrete-continuous dynamic optimization problems using a disjunctive multistage modeling framework with implicit discontinuities, which increases the problem complexity since the number of continuous phases and discrete events is not known a priori. After setting a fixed alternative sequence of modes, we convert the infinite-dimensional continuous mixed-logic dynamic (MLDO) problem into a finite-dimensional discretized GDP problem by orthogonal collocation on finite elements. We use the Logic-based Outer Approximation algorithm to fully exploit the structure of the GDP representation of the problem. This modeling framework is illustrated with an optimization problem with implicit discontinuities (diver problem).
Abstract:
We present an extension of the logic outer-approximation algorithm for dealing with disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included to take into account only those parts of the underlying model which become relevant under certain processing conditions. By doing so, the numerical robustness of the optimization algorithm improves, since the parts of the model that are not active are discarded, leading to a reduced-size problem and avoiding potential model singularities. We test the proposed algorithm on three examples of different complex dynamic behavior. In all the case studies the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.
Abstract:
In this article we present a model of organization of a belief system based on a set of binary recursive functions that characterize the dynamic context that modifies the beliefs. The initial beliefs are modeled by a set of two-bit words that grow, update, and generate other beliefs as the different experiences of the dynamic context appear. Reason is presented as an emergent effect of experience on the beliefs. The system presents a layered structure that allows a functional organization of the belief system. Our approach seems suitable for modeling different ways of thinking and for application to realistic scenarios such as ideologies.