922 results for Polynomial approximation
Abstract:
Let D be a link diagram with n crossings, sA and sB be its extreme states and |sAD| (respectively, |sBD|) be the number of simple closed curves that appear when smoothing D according to sA (respectively, sB). We give a general formula for the sum |sAD| + |sBD| for a k-almost alternating diagram D, for any k, characterizing this sum as the number of faces in an appropriate triangulation of an appropriate surface with boundary. When D is dealternator connected, the triangulation is especially simple, yielding |sAD| + |sBD| = n + 2 - 2k. This gives a simple geometric proof of the upper bound of the span of the Jones polynomial for dealternator connected diagrams, a result first obtained by Zhu [On Kauffman brackets, J. Knot Theory Ramifications 6(1) (1997) 125–148]. Another upper bound of the span of the Jones polynomial for dealternator connected and dealternator reduced diagrams, discovered historically first by Adams et al. [Almost alternating links, Topology Appl. 46(2) (1992) 151–165], is obtained as a corollary. As a new application, we prove that the Turaev genus is equal to the number k of dealternator crossings for any dealternator connected diagram.
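The counting formula above can be checked numerically. The sketch below (function names are ours, not from the paper) encodes |sAD| + |sBD| = n + 2 - 2k together with the standard Kauffman-bracket estimate span ≤ (n + |sAD| + |sBD| - 2)/2, which yields the bound n - k:

```python
def extreme_state_circles(n, k):
    """|s_A D| + |s_B D| for a dealternator connected k-almost
    alternating diagram with n crossings (formula in the abstract)."""
    return n + 2 - 2 * k

def jones_span_bound(n, k):
    """Upper bound on the span of the Jones polynomial via the standard
    Kauffman-bracket estimate (n + |s_A D| + |s_B D| - 2) / 2."""
    return (n + extreme_state_circles(n, k) - 2) // 2

# k = 0 recovers the alternating case: n + 2 circles and span bound n.
assert extreme_state_circles(8, 0) == 10
assert jones_span_bound(8, 0) == 8
# one dealternator crossing lowers the bound to n - 1
assert jones_span_bound(8, 1) == 7
```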
Abstract:
For centuries, earth has been used as a construction material. Nevertheless, regulation in this area is very scattered, and even in the most developed countries, building with this material entails a variety of technical and legal problems. In this paper we review, at an international level, the regulatory panorama of earth construction. We analyze ninety-one standards and regulations from countries across the five continents. These standards represent the state of the art in the standardization of earth as a construction material. In this research we analyze the international standards for earth construction, focusing on durability tests (spray and drip erosion tests) and examining the differences between test methods. We also present all the results of these tests on two types of compressed earth block.
Abstract:
We introduce a diffusion-based algorithm in which multiple agents cooperate to predict a common, global state-value function by sharing local estimates and local gradient information among neighbors. Our algorithm is a fully distributed implementation of gradient temporal difference learning with linear function approximation, making it applicable to multiagent settings. Simulations illustrate the benefit of cooperation in learning, as made possible by the proposed algorithm.
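The diffusion idea can be sketched roughly as follows. This is a simplified TD(0) toy with our own parameters, not the paper's gradient temporal-difference algorithm: each agent performs a local update with linear features and then averages its weight vector with its ring neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_feat, n_agents = 5, 3, 4
phi = rng.standard_normal((n_states, n_feat))  # feature rows, one per state
r = rng.standard_normal(n_states)              # state rewards
gamma, alpha = 0.9, 0.05
W = np.zeros((n_agents, n_feat))               # one weight vector per agent

for step in range(2000):
    # local TD(0) update at each agent (toy chain with uniform transitions)
    for a in range(n_agents):
        s, s2 = rng.integers(n_states), rng.integers(n_states)
        delta = r[s] + gamma * phi[s2] @ W[a] - phi[s] @ W[a]
        W[a] = W[a] + alpha * delta * phi[s]
    # diffusion (combine) step: average with the two ring neighbors
    W = np.array([(W[a] + W[a - 1] + W[(a + 1) % n_agents]) / 3.0
                  for a in range(n_agents)])

# cooperation drives all agents toward a common estimate
assert np.max(np.abs(W - W.mean(axis=0))) < 1.0
```

The combine step is what the diffusion literature calls the consensus averaging; here it uses uniform weights over a ring topology purely for illustration.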
Abstract:
We present analytical formulas to estimate the variation of achieved deflection for an Earth-impacting asteroid following a continuous tangential low-thrust deflection strategy. Relatively simple analytical expressions are obtained with the aid of asymptotic theory and the use of the Peláez orbital element set, an approach that is particularly suitable for the asteroid deflection problem and is not limited to small eccentricities. The accuracy of the proposed formulas is evaluated numerically, showing negligible error for both early and late deflection campaigns. The results will be of aid in planning future low-thrust asteroid deflection missions.
Abstract:
This paper contributes a unified formulation that merges previous analyses on the prediction of the performance (value function) of a certain sequence of actions (policy) when an agent operates a Markov decision process with a large state-space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
Abstract:
Intensity and volume of training in Artistic Gymnastics are increasing, as athletes are incorporated at ever earlier ages, creating some disturbances in them.
Abstract:
This Doctoral Thesis deals with the introduction of the Bernstein Partition of Unity into the Galerkin weak form to solve boundary value problems in the field of structural analysis. The family of Bernstein basis functions constitutes a spanning set of the space of polynomial functions that allows the construction of numerical approximations which require no mesh: the shape functions, which are globally supported, are determined only by the selected approximation order and by the parametrization or mapping of the domain, with the nodal positions implicitly defined. The exposition of the formulation is preceded by a literature review which starts from the Finite Element Method and covers the main techniques for solving Partial Differential Equations without a mesh, including the so-called Meshless Methods and the spectral methods. In this context, the Thesis subjects the Bernstein-Galerkin approximation to validation on classic one- and two-dimensional benchmarks of Structural Mechanics. Implementation aspects such as consistency, reproduction capability, the non-interpolating nature at boundaries, the h-p refinement strategy and the coupling with other numerical approximations are studied. An important part of the investigation focuses on the analysis and optimization of computational efficiency, mainly regarding the reduction of the CPU cost associated with the generation and handling of full matrices. Finally, two reference cases of aeronautic structures are addressed: the stress analysis of an angle part made of anisotropic material, and the evaluation of stress intensity factors of Fracture Mechanics by means of a model coupling the Bernstein Partition of Unity with a finite element mesh.
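The partition-of-unity property of the Bernstein basis that the thesis builds on is easy to verify numerically (a generic sketch, not code from the thesis):

```python
from math import comb

def bernstein_basis(n, t):
    """Bernstein basis polynomials B_{i,n}(t) = C(n, i) t^i (1 - t)^(n - i)."""
    return [comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]

# partition of unity: the n + 1 basis functions sum to 1 on [0, 1]
for t in (0.0, 0.3, 0.5, 1.0):
    assert abs(sum(bernstein_basis(4, t)) - 1.0) < 1e-12

# non-interpolating in the interior: no single basis function reaches 1
assert max(bernstein_basis(4, 0.5)) < 1.0
```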
Abstract:
A quasi-cylindrical approximation is used to analyse the axisymmetric swirling flow of a liquid with a hollow air core in the chamber of a pressure swirl atomizer. The liquid is injected into the chamber with an azimuthal velocity component through a number of slots at the periphery of one end of the chamber, and flows out as an annular sheet through a central orifice at the other end, following a conical convergence of the chamber wall. An effective inlet condition is used to model the effects of the slots and the boundary layer that develops at the nearby endwall of the chamber. An analysis is presented of the structure of the liquid sheet at the end of the exit orifice, where the flow becomes critical in the sense that upstream propagation of long-wave perturbations ceases to be possible. This analysis leads to a boundary condition at the end of the orifice that is an extension of the condition of maximum flux used with irrotational models of the flow. As is well known, the radial pressure gradient induced by the swirling flow in the bulk of the chamber causes the overpressure that drives the liquid towards the exit orifice, and also leads to Ekman pumping in the boundary layers of reduced azimuthal velocity at the convergent wall of the chamber and at the wall opposite to the exit orifice. The numerical results confirm the important role played by the boundary layers. They make the thickness of the liquid sheet at the end of the orifice larger than predicted by irrotational models, and at the same time tend to decrease the overpressure required to pass a given flow rate through the chamber, because the large axial velocity in the boundary layers takes care of part of the flow rate. The thickness of the boundary layers increases when the atomizer constant (the inverse of a swirl number, proportional to the flow rate scaled with the radius of the exit orifice and the circulation around the air core) decreases.
A minimum value of this parameter is found below which the layer of reduced azimuthal velocity around the air core prevents the pressure from increasing and steadily driving the flow through the exit orifice. The effects of other parameters not accounted for by irrotational models are also analysed in terms of their influence on the boundary layers.
Abstract:
Social behavior is mainly based on swarm colonies, in which each individual shares its knowledge about the environment with other individuals to reach optimal solutions. Such a cooperative model differs from competitive models in that individuals die and new ones are born by combining the information of the living ones. This paper presents a particle swarm optimization with differential evolution algorithm to train a neural network, instead of the classic backpropagation algorithm. The performance of a neural network for particular problems is critically dependent on the choice of the processing elements, the net architecture and the learning algorithm. This work focuses on the development of methods for the evolutionary design of artificial neural networks; in particular, it optimizes the topology and connectivity structure of these networks.
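A minimal sketch of the underlying idea: plain particle swarm optimization training a tiny fixed-topology network on XOR. The differential-evolution hybridization and the topology evolution described above are omitted, and all parameter values here are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def loss(w):
    """Mean squared error of a 2-3-1 network whose weights are flattened in w."""
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12], w[12]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((out - y) ** 2))

n_particles, dim = 30, 13
pos = rng.standard_normal((n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
for _ in range(300):
    gbest = pbest[pbest_f.argmin()]           # swarm-wide best weights
    vel = (0.7 * vel                          # inertia
           + 1.5 * rng.random((n_particles, 1)) * (pbest - pos)   # cognitive
           + 1.5 * rng.random((n_particles, 1)) * (gbest - pos))  # social
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

# a constant 0.5 output scores MSE 0.25; the swarm should do much better
assert pbest_f.min() < 0.2
```

Note that no gradient of the loss is ever computed, which is exactly what makes population-based training an alternative to backpropagation.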
Abstract:
The segmental approach has been considered to analyze dark and light I-V curves. The dependences of the open-circuit voltage (Voc), the maximum power point voltage (Vm) and the efficiency (η) on the photogenerated current (Jg), or on the sunlight concentration ratio (X), are analyzed, as well as other photovoltaic (PV) characteristics of multijunction solar cells. The characteristics being analyzed are split into monoexponential (linear on the semilogarithmic scale) portions, each of which is characterized by a definite value of the ideality factor A and preexponential current J0. The monoexponentiality ensures advantages, since at many steps of the analysis one can use analytical dependences instead of numerical methods. In this work, an experimental procedure for obtaining the necessary parameters is proposed, and an analysis of GaInP/GaInAs/Ge triple-junction solar cell characteristics is carried out. It is shown that, up to the sunlight concentration ratios at which the efficiency maximum is achieved, the dark and light I-V curves calculated by the segmental method fit the experimental data well. An important consequence of this work is the feasibility of acquiring the resistance-less dark and light I-V curves, which can be used for obtaining the I-V curves characterizing the losses in the transport part of a solar cell.
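Within a single monoexponential segment, closed-form expressions replace numerical fitting. For instance (an illustrative relation with hypothetical parameter values, not data from the paper), the open-circuit voltage follows Voc = A (kT/q) ln(Jg/J0 + 1), so raising the concentration ratio X tenfold raises Voc by roughly A (kT/q) ln 10:

```python
from math import log

K_T_Q = 8.617e-5 * 300.0  # thermal voltage kT/q at 300 K, in volts

def segment_voc(Jg, J0, A):
    """Open-circuit voltage of one monoexponential I-V segment."""
    return A * K_T_Q * log(Jg / J0 + 1.0)

# hypothetical segment: ideality factor A = 1.1, J0 = 1e-20 A/cm^2
v1 = segment_voc(1e-2, 1e-20, 1.1)   # baseline photogenerated current
v10 = segment_voc(1e-1, 1e-20, 1.1)  # 10x concentration
# for Jg >> J0, the Voc gain per decade of concentration is A*(kT/q)*ln(10)
assert abs((v10 - v1) - 1.1 * K_T_Q * log(10.0)) < 1e-9
```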
Abstract:
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
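The stochastic-approximation backbone can be illustrated with a minimal Robbins-Monro iteration (a toy sketch, not the paper's MCMC-based procedure): to solve E[g(θ)] = 0 when only noisy evaluations of g are available, follow the noisy value with a decreasing step size.

```python
import random

random.seed(0)
mu = 2.5          # unknown root of g(theta) = theta - mu
theta = 0.0
for n in range(1, 20001):
    z = random.gauss(mu, 1.0)     # noisy observation of mu
    g_noisy = theta - z           # unbiased estimate of g(theta)
    theta -= g_noisy / n          # Robbins-Monro step size a_n = 1/n

assert abs(theta - mu) < 0.05     # iterates converge to the root theta* = mu
```

The paper's setting replaces the single Gaussian draw with a Markov chain Monte Carlo sample from the missing-data distribution, but the averaging mechanism is the same.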
Abstract:
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science.
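A classic instance of such a provable guarantee (our illustrative example, not one from the article) is the maximal-matching 2-approximation for minimum vertex cover: it runs in linear time, whereas trying all 2^(10^6) assignments is hopeless.

```python
def vertex_cover_2approx(edges):
    """Pick both endpoints of any uncovered edge; the result is a vertex
    cover of size at most twice that of an optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# path 1-2-3-4: the optimum cover {2, 3} has size 2
edges = [(1, 2), (2, 3), (3, 4)]
c = vertex_cover_2approx(edges)
assert all(u in c or v in c for u, v in edges)  # every edge is covered
assert len(c) <= 2 * 2                          # within the 2x guarantee
```

The guarantee follows because the chosen edges form a matching, and any cover must contain at least one endpoint of each matched edge.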
Abstract:
We investigated how human subjects adapt to forces perturbing the motion of their arms. We found that this kind of learning is based on the capacity of the central nervous system (CNS) to predict and therefore to cancel externally applied perturbing forces. Our experimental results indicate: (i) that the ability of the CNS to compensate for the perturbing forces is restricted to those spatial locations where the perturbations have been experienced by the moving arm. The subjects are also able to compensate for forces experienced at neighboring workspace locations. However, adaptation decays smoothly and quickly with distance from the locations where disturbances had been sensed by the moving limb. (ii) Our experiments also show that the CNS builds an internal model of the external perturbing forces in intrinsic (muscle and/or joint) coordinates.
Abstract:
We compute the E-polynomial of the character variety of representations of a rank r free group in SL(3,C). Expanding upon techniques of Logares, Muñoz and Newstead (Rev. Mat. Complut. 26:2 (2013), 635-703), we stratify the space of representations and compute the E-polynomial of each geometrically described stratum using fibrations. Consequently, we also determine the E-polynomial of its smooth, singular, and abelian loci and the corresponding Euler characteristic in each case. Along the way, we give a new proof of results of Cavazos and Lawton (Int. J. Math. 25:6 (2014), 1450058).
Abstract:
Efficient hardware implementations of arithmetic operations in the Galois field are highly desirable for several applications, such as coding theory, computer algebra and cryptography. Among these operations, multiplication is of special interest because it is considered the most important building block. Therefore, high-speed algorithms and hardware architectures for computing multiplication are strongly demanded. In this paper, bit-parallel polynomial basis multipliers over the binary field GF(2^m) generated using type II irreducible pentanomials are considered. The multiplier presented here has the lowest time complexity known to date among similar multipliers based on this type of irreducible pentanomials.
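For reference, polynomial-basis multiplication in GF(2^m) amounts to a carry-less multiply followed by reduction modulo the irreducible polynomial. The bit-serial sketch below uses the AES pentanomial x^8 + x^4 + x^3 + x + 1 for illustration; the paper's bit-parallel architecture and its type II pentanomials are not reproduced here.

```python
def gf_mult(a, b, m=8, poly=0x11B):
    """Multiply a and b in GF(2^m), polynomial basis, reducing by `poly`."""
    result = 0
    while b:
        if b & 1:          # add (XOR) the current shift of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):   # degree reached m: reduce modulo poly
            a ^= poly
    return result

# classic check in GF(2^8): 0x53 and 0xCA are multiplicative inverses
assert gf_mult(0x53, 0xCA) == 1
assert gf_mult(0x02, 0x80) == 0x1B  # x * x^7 = x^8 ≡ x^4 + x^3 + x + 1
```

A bit-parallel multiplier computes all output bits of this same product simultaneously in combinational logic, which is where the time-complexity comparison in the abstract applies.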