977 results for Linear multistep methods
Abstract:
There has been significant interest in parallel execution models for logic programs which exploit Independent And-Parallelism (IAP). In these models it is necessary to determine which goals are independent, and therefore eligible for parallel execution, and which goals have to wait for which others during execution. Although this can be done at run-time, it can imply a very heavy overhead. In this paper, we present three algorithms for automatic compile-time parallelization of logic programs using IAP. This is done by converting a clause into a graph-based computational form and then transforming this graph into linear expressions based on &-Prolog, a language for IAP. We also present an algorithm which, given a clause, determines whether there is any loss of parallelism due to linearization, for the case in which only unconditional parallelism is desired. Finally, the performance of these annotation algorithms is discussed for some benchmark programs.
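As a minimal illustration of the independence test that underlies unconditional IAP (the simplified strict notion: goals sharing no variables may run in parallel), the following Python sketch uses hypothetical helpers and is not the paper's annotation algorithms.

```python
# Hypothetical sketch of strict goal independence for IAP: two goals are
# treated as independent when they share no variables at the check point.

def shared_vars(goal_a_vars, goal_b_vars):
    """Return the set of variables occurring in both goals."""
    return set(goal_a_vars) & set(goal_b_vars)

def independent(goal_a_vars, goal_b_vars):
    """Unconditional (strict) independence: no shared variables."""
    return not shared_vars(goal_a_vars, goal_b_vars)

# Example clause body  p(X,Y) :- q(X), r(Y), s(X,Y).
goals = {"q(X)": {"X"}, "r(Y)": {"Y"}, "s(X,Y)": {"X", "Y"}}

# q(X) and r(Y) share nothing, so they may run in parallel
# (q(X) & r(Y) in &-Prolog notation); s(X,Y) depends on both and must wait.
print(independent(goals["q(X)"], goals["r(Y)"]))   # True
print(independent(goals["q(X)"], goals["s(X,Y)"])) # False
```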
Abstract:
Non-linear physical systems of infinite extent are conveniently modelled using FE–BE coupling methods. By combining the two methods, the advantages of each can be exploited. Several possibilities of FEM–BEM coupling and their performance in some practical cases are discussed in this paper. Parallelizable coupling algorithms based on domain decomposition are developed and compared with the more traditional coupling methods.
Abstract:
These slides present several 3-D reconstruction methods for obtaining the geometric structure of a scene viewed by multiple cameras. We focus on combining the geometric modeling of the image formation process with standard optimization tools to estimate the characteristic parameters that describe the geometry of the 3-D scene. In particular, linear, non-linear and robust methods to estimate the monocular and epipolar geometry are introduced as cornerstones to generate 3-D reconstructions with multiple cameras. Some examples of systems that use this constructive strategy are Bundler, PhotoSynth, VideoSurfing, etc., which are able to obtain 3-D reconstructions with several hundreds or thousands of cameras.
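As one example of the linear epipolar-geometry estimators referred to above, here is a minimal NumPy sketch of the classical (unnormalized) eight-point algorithm for the fundamental matrix; in practice coordinate normalization and a robust wrapper such as RANSAC would be added. The function name and interface are illustrative.

```python
import numpy as np

def fundamental_matrix_8pt(x1, x2):
    """Linear (unnormalized) eight-point estimate of the fundamental matrix.
    x1, x2: (N, 2) arrays of corresponding pixel coordinates, N >= 8."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each correspondence gives one row of the linear system A f = 0,
    # derived from the epipolar constraint x2^T F x1 = 0.
    A = np.column_stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2,
                         u1, v1, np.ones_like(u1)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)            # null vector reshaped to 3x3
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```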
Abstract:
After the experience gained during the past years, it seems clear that non-linear analysis of bridges is very important to compute ductility demands and to localize potential hinges. This is especially true for irregular bridges, for which it is not clear whether it is possible to use a linear computation followed by a correction using a behaviour factor. To simplify the numerical effort, several approximate methods have been proposed. Among them, the so-called Dynamic Plastic Hinge Method, in which an evolutionary shape function is used to reduce the structure to a single-degree-of-freedom system, seems to maintain a good balance between accuracy and simplicity. This paper presents results obtained in a parametric study conducted under the auspices of the PREC-8 European research programme.
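For reference, the generic shape-function reduction to a single degree of freedom reads as follows (standard generalized-SDOF relations in matrix form; the evolutionary shape function of the Dynamic Plastic Hinge Method updates the shape as hinges form, which is not detailed here):

```latex
u(x,t) \approx \phi\, q(t), \qquad
m^{*} = \phi^{T} M \phi, \quad
k^{*} = \phi^{T} K \phi, \quad
p^{*}(t) = \phi^{T} p(t), \qquad
m^{*}\ddot{q} + k^{*} q = p^{*}(t).
```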
Abstract:
It is well known that the evaluation of the influence matrices in the boundary-element method requires the computation of singular integrals. Quadrature formulae exist which are especially tailored to the specific nature of the singularity, e.g. $\log(x - x_0)$, $1/(x - x_0)$, etc. Clearly, the nodes and weights of these formulae vary with the location $x_0$ of the singular point. A drawback of this approach is that a given problem usually includes different types of singularities, and therefore a general-purpose code would have to include many alternative formulae to cater for all possible cases. Recently, several authors [1-3] have suggested a type-independent alternative technique based on the combination of standard Gaussian rules with non-linear co-ordinate transformations. The transformation approach is particularly appealing in connection with the p-adaptive version, where the location of the collocation points varies at each step of the refinement process. The purpose of this paper is to analyse the technique in Reference 3. We show that this technique is asymptotically correct as the number of Gauss points increases. However, the method possesses a 'hidden' source of error that is analysed here and can easily be removed.
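As a toy illustration of the general idea (a standard Gaussian rule combined with a non-linear co-ordinate transformation), the following Python/NumPy sketch integrates log(x) on (0, 1) with and without a simple cubic transformation. This is not the specific transformation analysed in the paper.

```python
import numpy as np

def gauss_on_01(f, n):
    """Integrate f over (0, 1) with an n-point Gauss-Legendre rule."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (t + 1.0)            # map from (-1, 1) to (0, 1)
    return 0.5 * np.sum(w * f(x))

f = np.log                          # integrand with a log singularity at x = 0
exact = -1.0                        # integral of log(x) over (0, 1)

# Plain Gauss rule: converges slowly because of the singularity.
plain = gauss_on_01(f, 10)

# Non-linear transformation x = t**3 clusters points near the singularity;
# the transformed integrand 3*t**2*log(t**3) is bounded at t = 0.
transformed = gauss_on_01(lambda t: 3.0 * t**2 * f(t**3), 10)

print(abs(plain - exact), abs(transformed - exact))
# The transformed rule is markedly more accurate for the same number of points.
```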
Abstract:
Agro-areas of the Arroyos Menores (La Colacha) basins, west and south of Río Cuarto (Province of Córdoba, Argentina), are very fertile but suffer high soil losses. Extreme rain events, inundations and other severe erosion processes forming gullies urgently demand action in this area to avoid soil degradation and erosion while supporting good levels of agricultural production. The authors first improved the hydrologic data on La Colacha, evaluated the systems of soil use and the actions that could be recommended considering the relevant aspects of the study area, and applied decision support systems (DSS) with mathematical tools for the planning of defences and soil uses in these areas. This was done using multi-criteria models within multi-criteria decision making (MCDM): first discrete MCDM to choose among global types of soil use, and then continuous MCDM to evaluate and optimize combined actions, including the repartition of soil use and the necessary levels of works for soil conservation and hydraulic management to protect these basins against erosion. Relatively global solutions for the La Colacha area have been defined and optimised by Linear Programming in Goal Programming forms, presented as Weighted or Lexicographic Goal Programming and as Compromise Programming. The decision methods used are described, indicating the algorithms employed, and examples are given for some representative scenarios of the La Colacha area.
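For reference, the generic weighted goal programming template behind the formulations mentioned above is recalled below; the symbols are generic, not the paper's notation.

```latex
% Weighted goal programming: for goals f_i with targets t_i, deviation
% variables d_i^-, d_i^+ >= 0 measure under- and over-achievement.
\min \; \sum_{i} \left( w_i^{-} d_i^{-} + w_i^{+} d_i^{+} \right)
\quad \text{s.t.} \quad
f_i(\mathbf{x}) + d_i^{-} - d_i^{+} = t_i ,\qquad
\mathbf{x} \in X,\; d_i^{-}, d_i^{+} \ge 0 .
```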
Abstract:
Non-linear transformations are a good alternative for the numerical evaluation of the singular and quasi-singular integrals appearing in the Boundary Element Method, especially in the p-adaptive version. Some aspects of their numerical implementation in 2-D potential codes are discussed and some examples are shown.
Abstract:
A unified solution framework is presented for one-, two- or three-dimensional complex non-symmetric eigenvalue problems, respectively governing linear modal instability of incompressible fluid flows in rectangular domains having two, one or no homogeneous spatial directions. The solution algorithm is based on subspace iteration in which the spatial discretization matrix is formed, stored and inverted serially. Results delivered by spectral collocation based on the Chebyshev-Gauss-Lobatto (CGL) points and by a suite of high-order finite-difference methods, comprising the Dispersion-Relation-Preserving (DRP) and Padé finite-difference schemes previously employed for this type of work, as well as the Summation-by-Parts (SBP) scheme and the new high-order finite-difference scheme of order q (FD-q), have been compared from the point of view of accuracy and efficiency in standard validation cases of temporal local and BiGlobal linear instability. The FD-q method has been found to significantly outperform all other finite-difference schemes in solving classic linear local, BiGlobal and TriGlobal eigenvalue problems, as regards both memory and CPU time requirements. The results of the present study disprove the paradigm that spectral methods are superior to finite-difference methods in terms of computational cost at equal accuracy, the FD-q spatial discretization delivering a speedup of $\mathcal{O}(10^4)$. Consequently, the three-dimensional (TriGlobal) eigenvalue problems may be solved accurately on typical desktop computers with modest computational effort.
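As a minimal illustration of the spectral-collocation ingredient mentioned above, the following Python/NumPy sketch builds the Chebyshev-Gauss-Lobatto points and the associated first-derivative matrix using the standard construction; it is not the study's instability solver.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and first-derivative matrix
    (standard spectral-collocation construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # CGL points on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal via row sums
    return D, x

# Sanity check: differentiate sin(x) on the CGL grid.
D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))        # ~1e-13 (spectral accuracy)
```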
Abstract:
Most flows of engineering relevance still remain unexplored in a global instability theory context for two reasons: first, because of the difficulties associated with the analysis of turbulent flows and, second, because of the formidable computational resources required for the solution of the eigenvalue problem associated with the instability analysis of three-dimensional base flows, also known as the TriGlobal problem. In this thesis, the problem associated with the three-dimensionality is addressed by means of the development of a general approach to the solution of large-scale global linear instability analysis, coupling a time-stepping approach with second-order aerodynamic codes employed in industry.
Three flows, challenging in terms of the required computational resources and physical complexity, have been chosen for the demonstration of the present methodology: (i) the flow inside a wall-bounded three-dimensional lid-driven cavity, (ii) the flow past a cylinder fitted with helical strakes and (iii) the flow over an inhomogeneous three-dimensional open cavity. Results in excellent agreement with the literature have been obtained for the three-dimensional lid-driven cavity by using this methodology coupled with the incompressible solver of the open-source toolbox OpenFOAM®, which has served as validation. Moreover, significant physical insight into the instability of three-dimensional open flows has been gained through the application of the present time-stepping methodology to the other two cases. In addition, modifications to the present approach have been proposed in order to perform adjoint instability analysis of three-dimensional base flows and flow control; validation and TriGlobal examples are presented. Finally, it has been demonstrated that the moderate amount of computational resources required for the solution of the TriGlobal eigenvalue problem using this method enables instability analysis and control of flows of industrial relevance.
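A schematic sketch of the matrix-free (time-stepping) Krylov idea described above follows, with a small matrix exponential standing in for the external CFD solver; function names and problem sizes are illustrative only, not taken from the thesis.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_ritz(apply_operator, v0, m):
    """Matrix-free Arnoldi: build an m-dimensional Krylov basis using only
    applications of the operator, then return the Ritz values of the small
    Hessenberg matrix. In a time-stepping approach, apply_operator(v) stands
    for one run of the flow solver advancing the linearized perturbation v
    over a fixed time interval."""
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = apply_operator(V[:, j])
        for i in range(j + 1):                     # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:m, :m])

# Toy stand-in for the external solver: the exponential propagator of a
# small random stable operator.
rng = np.random.default_rng(0)
A = -np.eye(50) + 0.1 * rng.standard_normal((50, 50))
propagator = expm(0.5 * A)

ritz = arnoldi_ritz(lambda v: propagator @ v, rng.standard_normal(50), 20)
# The dominant Ritz value approximates the leading eigenvalue of the
# propagator; its logarithm over the time interval gives the growth rate.
print(np.max(np.abs(ritz)), np.max(np.abs(np.linalg.eigvals(propagator))))
```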
Abstract:
Machine and statistical learning techniques are used in almost all online advertisement systems. The problem of discovering which content is most demanded (e.g. receives more clicks) can be modeled as a multi-armed bandit problem. Contextual bandits (i.e., bandits with covariates, side information or associative reinforcement learning) associate with each specific content several features that define the “context” in which it appears (e.g. user, web page, time, region). This problem can be studied in the stochastic/statistical setting by means of the conditional probability paradigm using Bayes’ theorem. However, for very large contextual information and/or real-time constraints, the exact calculation of Bayes’ rule is computationally infeasible. In this article, we present a method that is able to handle large contextual information for learning in contextual-bandit problems. This method was tested on the Yahoo! dataset of the challenge at ICML 2012’s workshop “New Challenges for Exploration & Exploitation 3”, obtaining second place. Its basic exploration policy is deterministic in the sense that for the same input data (as a time series) the same results are obtained. We address the deterministic exploration vs. exploitation issue, explaining the way in which the proposed method deterministically finds an effective dynamic trade-off based solely on the input data, in contrast to other methods that use a random number generator.
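For readers unfamiliar with the setting, here is a minimal sketch of a standard contextual-bandit baseline (disjoint LinUCB); it is explicitly not the method described in the abstract, only an illustration of the problem structure.

```python
import numpy as np

class LinUCBArm:
    """One arm of the disjoint LinUCB contextual bandit -- a standard
    baseline shown only to illustrate the setting, not the abstract's method."""
    def __init__(self, d, alpha=1.0):
        self.A = np.eye(d)            # ridge-regression Gram matrix
        self.b = np.zeros(d)          # accumulated reward-weighted contexts
        self.alpha = alpha            # exploration strength

    def ucb(self, x):
        """Upper confidence bound of the expected reward for context x."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        """Incorporate the observed reward (e.g. click / no click)."""
        self.A += np.outer(x, x)
        self.b += reward * x

def choose(arms, x):
    """Pick the arm (e.g. the article to display) with the highest UCB."""
    return max(range(len(arms)), key=lambda k: arms[k].ucb(x))
```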
Abstract:
Two different methods for the analysis of plate bending, the FEM and the BM, are discussed in this paper. The plate behaviour is assumed to be represented by the linear thin-plate theory, where the Poisson-Kirchhoff assumption holds. The BM, based on a weighted mean-square-error technique, produced good results for the plate bending problem. The computational effort demanded by the BM is smaller than that needed in a FEM analysis for the same level of accuracy. The general applicability of the FEM cannot be matched by the BM; in particular, different types of geometry (plates of arbitrary geometry) need a similar but not identical treatment in the BM. However, this loss of generality is counterbalanced by the computational efficiency gained with the BM.
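For reference, the governing relation of the linear thin-plate (Poisson-Kirchhoff) theory that both methods discretize, in standard notation (deflection w, transverse load q, Young's modulus E, thickness t, Poisson ratio ν):

```latex
D\,\nabla^{4} w = q, \qquad D = \frac{E\,t^{3}}{12\,(1-\nu^{2})}.
```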
Abstract:
Many image processing methods, such as techniques for people re-identification, assume photometric constancy between different images. This study addresses the correction of photometric variations, using changes observed in background areas to correct foreground areas. The authors assume a multiple-light-source model in which all light sources can have different colours and change over time. In training mode, the authors learn per-location relations between foreground and background colour intensities. In correction mode, they apply a double linear correction model based on the learned relations. This double linear correction includes a dynamic local illumination correction mapping as well as an inter-camera mapping. The authors evaluate their illumination correction by computing the similarity between two images based on the earth mover's distance. They compare the results to a representative auto-exposure algorithm found in the recent literature and to a colour correction algorithm based on the inverse-intensity chromaticity. Especially in complex scenarios, the authors’ method outperforms these state-of-the-art algorithms.
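To make the idea of a linear photometric mapping concrete, here is a minimal sketch of a per-channel gain/offset correction estimated from background pixels; the authors' double correction additionally chains a dynamic local illumination mapping with an inter-camera mapping, which is not reproduced here. Function names are illustrative.

```python
import numpy as np

def fit_linear_map(src, dst):
    """Least-squares gain/offset per colour channel: dst ~ a*src + b.
    src, dst: (N, 3) arrays of background intensities observed at the same
    locations under the reference and the current illumination."""
    params = []
    for c in range(3):
        A = np.column_stack([src[:, c], np.ones(len(src))])
        a, b = np.linalg.lstsq(A, dst[:, c], rcond=None)[0]
        params.append((a, b))
    return params

def apply_linear_map(img, params):
    """Apply the per-channel linear correction to an (H, W, 3) image."""
    out = img.astype(float).copy()
    for c, (a, b) in enumerate(params):
        out[..., c] = a * out[..., c] + b
    return np.clip(out, 0, 255)
```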
Abstract:
The Department of Structural Analysis of the University of Santander has long been involved in the solution of the country's practical engineering problems. Some of these have required the use of non-conventional methods of analysis in order to achieve adequate engineering answers. As examples of the increasing application of non-linear computer codes in present-day engineering practice, some cases are briefly presented. In each case, only the main features of the problem involved and the solution used to solve it are shown.
Abstract:
The main objective of this thesis is the development of optimization methods for the radiation pattern synthesis of array antennas in which a rigorous electromagnetic characterization of the radiators and the mutual coupling between them is performed.
The electromagnetic characterization is usually overlooked in most of the synthesis methods available in the literature, mainly for two reasons. On the one hand, it is argued that the radiation pattern of an array is mainly determined by the array factor and that the mutual coupling plays a minor role. As is shown in this thesis, the mutual coupling and the rigorous characterization of the array antenna influence the array performance significantly, and their computation leads to differences in the results obtained. On the other hand, it is difficult to introduce an analysis procedure into a synthesis technique. The analysis of array antennas is generally computationally expensive, as the structure to be analyzed is large in terms of wavelengths, and a synthesis method requires carrying out a large number of analyses, which makes the synthesis problem computationally very expensive or intractable in some cases. Two methods have been used in this thesis for the analysis of coupled antenna arrays, both developed in the research group in which this thesis is framed. They are based on the finite element method (FEM), domain decomposition and modal analysis. The first one obtains a finite array characterization from the results of the infinite array approach; it is especially indicated for the analysis of large arrays with equispaced elements. The second one characterizes the array elements and the mutual coupling between them with a spherical wave expansion of the field radiated by each element; the mutual coupling is computed using the translation and rotation properties of spherical waves. This method is able to analyze arrays with elements placed in an arbitrary distribution. Both techniques provide a matrix formulation that makes them very suitable for integration into synthesis techniques, so the results obtained from these synthesis methods are very accurate. Array synthesis stands for the modification of one or several array parameters in search of some desired specifications of the radiation pattern. The array parameters used as optimization variables are usually the excitation weights applied to the array elements, but other array characteristics, such as the element positions or rotations, can be used as well. The desired specifications may be to steer the beam towards a specific direction or to generate shaped beams with arbitrary geometry. Further characteristics can be handled as well, such as minimizing the side-lobe level in other radiating regions, minimizing the ripple of the shaped beam, taking control of the cross-polar component or imposing nulls on the radiation pattern to avoid possible interferences from specific directions. The analysis method based on the infinite array approach considers an infinite array with a finite number of excited elements. The non-excited elements are physically present and may have three different terminations: short-circuit, open-circuit and match-terminated, each of which better represents a different real environment in which the array may operate. This method is used in this thesis for the development of two synthesis methods. In the first one, a multi-objective radiation pattern synthesis is presented, in which it is possible to steer the beam or beams in desired directions while minimizing the side-lobe level, with the possibility of imposing nulls in the radiation pattern.
This method is very efficient and obtains optimal solutions, as it is based on convex programming. The same analysis method is used in a shaped-beam technique in which an originally non-convex problem is transformed into a convex one by applying symmetry restrictions, thus solving a complex problem in an efficient way. This method allows the synthesis of shaped-beam radiation patterns, controlling the ripple in the main lobe and the side-lobe level. The analysis method based on the spherical wave expansion is applied to different synthesis techniques for the radiation pattern of coupled arrays. A shaped-beam synthesis is presented in which a convex formulation is proposed based on the phase retrieval method; in this technique, an originally non-convex problem is solved by means of a relaxation, solving convex problems iteratively. Two further methods are proposed based on the gradient method. A cost function is defined involving the radiation intensity of the coupled array and a weighting function that provides more degrees of freedom to the designer. The gradient of the cost function is computed with respect to the element positions in one of them and to the element rotations in the other, and the elements are moved or rotated iteratively following the gradient. A highly non-convex problem is solved very efficiently, obtaining very good results that depend on the starting point. Finally, an optimization method is presented in which discrete digital phases are synthesized, providing a radiation pattern as close as possible to the desired one; the problem is solved using linear integer programming procedures, obtaining array designs that greatly reduce the fabrication costs. Results are provided for every method, showing the capabilities that the above-mentioned methods offer. The results obtained are compared with methods available in the literature. The importance of introducing a rigorous analysis into the synthesis method is emphasized, and the results obtained are compared with commercial software, showing good agreement.
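To make the array-factor idealization discussed above concrete, here is a short NumPy sketch for a uniform linear array; element patterns and mutual coupling, precisely what the thesis accounts for rigorously, are ignored in this approximation.

```python
import numpy as np

def array_factor(weights, d_over_lambda, theta):
    """Array factor of a uniform linear array:
    AF(theta) = sum_n w_n * exp(j*2*pi*n*(d/lambda)*sin(theta)).
    This ignores element patterns and mutual coupling."""
    n = np.arange(len(weights))
    phase = 2j * np.pi * np.outer(n, d_over_lambda * np.sin(theta))
    return weights @ np.exp(phase)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
w = np.ones(16)                        # uniform excitation, 16 elements
af_db = 20 * np.log10(np.abs(array_factor(w, 0.5, theta)) / 16)
# Peak at broadside; first side lobe at about -13.3 dB, as expected for a
# uniformly excited linear array with half-wavelength spacing.
```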
Evaluation of numerical methods for the linear stability analysis of cold-formed steel members.
Abstract:
In the design of structures with cold-formed steel members, understanding local and global instability phenomena is fundamental, since these members have high slenderness and low torsional stiffness. Determining the critical load and identifying the instability mode contribute to the understanding of the behaviour of these structures. This work evaluates three methodologies for the linear stability analysis of isolated cold-formed steel members, with the objective of determining the elastic critical bifurcation loads and the associated instability modes. Specifically, isolated lipped-channel and lipped Z sections of several lengths and different support and loading conditions are analysed. The elastic critical bifurcation loads and the global and local instability modes are determined by means of: (i) analysis with the Finite Strip Method (FSM), using the program CUFSM; (ii) analysis with beam finite elements based on Generalized Beam Theory (GBT), using the program GBTUL; and (iii) analysis with shell finite elements, using the program ABAQUS. Some restrictions and caveats regarding the use of the FSM are presented, as well as limitations of Generalized Beam Theory and precautions to be taken in the shell models. The influence of the degree of discretization of the cross-section is also analysed. However, no assessment is made with respect to design-code procedures, nor are non-linear analyses performed that consider initial geometric imperfections, residual stresses and the elastoplastic behaviour of the material.
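For context, the linear (bifurcation) stability analysis performed by all three approaches reduces to the standard generalized eigenproblem below, where K_e is the elastic stiffness matrix, K_g the geometric stiffness matrix under the reference loading, λ the critical load factor and φ the buckling mode; the methods differ in how these matrices are built (finite strips, GBT beam elements or shell elements).

```latex
\left( \mathbf{K}_{e} + \lambda \, \mathbf{K}_{g} \right) \boldsymbol{\phi} = \mathbf{0}.
```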