976 results for numerical algorithm
Abstract:
Markov Chain Monte Carlo methods are widely used in signal processing and communications for statistical inference and stochastic optimization. In this work, we introduce an efficient adaptive Metropolis-Hastings algorithm for drawing samples from generic multimodal and multidimensional target distributions. The proposal density is a mixture of Gaussian densities whose parameters (weights, mean vectors, and covariance matrices) are all updated from the previously generated samples using simple recursive rules. Numerical results for one- and two-dimensional cases are provided.
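For illustration, the following is a minimal sketch of this general idea, using a single adaptive Gaussian proposal with recursive moment updates rather than the full Gaussian-mixture proposal of the paper; the target density, scaling constants, and iteration counts are illustrative assumptions, not taken from the work itself.

```python
# Minimal adaptive Metropolis sketch: proposal statistics are updated
# recursively from all previously generated samples (single Gaussian here,
# not the paper's full mixture).
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=20000, adapt_start=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    lp = log_target(x)
    mean = np.zeros(d)                 # running mean of all samples
    m2 = np.zeros((d, d))              # running sum of squared deviations
    samples = np.empty((n_iter, d))
    for t in range(1, n_iter + 1):
        if t > adapt_start:            # adapted proposal (classic 2.38^2/d scaling)
            cov = (2.38**2 / d) * m2 / (t - 1) + 1e-9 * np.eye(d)
        else:                          # fixed proposal during early iterations
            cov = 0.5 * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[t - 1] = x
        delta = x - mean               # simple recursive moment updates
        mean += delta / t
        m2 += np.outer(delta, x - mean)
    return samples

# Example: a bimodal 2-D target (log-density of a two-Gaussian mixture)
def log_target(x):
    x = np.asarray(x)
    return np.logaddexp(-0.5 * np.sum((x - 3.0)**2), -0.5 * np.sum((x + 3.0)**2))

draws = adaptive_metropolis(log_target, x0=[0.1, -0.1])
```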
Abstract:
The study of magnesium and its alloys is a hot research topic in structural materials. In particular, special attention is being paid to understanding the relationship between microstructure and mechanical behavior, in order to optimize current alloy microstructures and guide the design of new alloys. However, the particular effect of several microstructural factors (precipitate shape, size, and orientation; grain morphology distribution; etc.) on the mechanical performance of a Mg alloy is still under study. A combination of advanced characterization techniques and modeling at several length scales is necessary to improve the understanding of the relationship between microstructure and mechanical behavior. With respect to simulation techniques, polycrystalline homogenization is a very useful tool to predict the macroscopic response from the polycrystalline microstructure (grain size, shape, and orientation distributions) and the crystal behavior. The description of the microstructure is fully covered by modern characterization techniques (X-ray diffraction, EBSD, optical and electron microscopy). However, the mechanical behavior of single crystals is not well known, especially in Mg alloys, where the correct parameterization of the mechanical behavior of the different slip/twin modes is a very difficult task.
A computational homogenization framework for predicting the behavior of magnesium alloys has been developed in this thesis. The polycrystalline behavior was obtained by finite element simulation of a representative volume element (RVE) of the microstructure, including the actual grain shape and orientation distributions. The crystal behavior of the grains was described by a crystal plasticity model that takes into account the physical deformation mechanisms, namely slip and twinning. Finally, the parameterization of the crystal behavior (critical resolved shear stresses (CRSSs) and strain hardening rates of all slip and twinning modes) was solved by developing an inverse optimization methodology, one of the main original contributions of this thesis. The inverse methodology aims at finding, by means of the Levenberg-Marquardt optimization algorithm, the set of parameters defining the crystal behavior that best fits a set of independent macroscopic tests. The objectivity of the method and the uniqueness of the solution as a function of the input information have been studied numerically.
The inverse optimization strategy was first used to obtain the crystal behavior of a rolled polycrystalline AZ31 Mg alloy showing a marked basal texture and a strong plastic anisotropy. Four different deformation mechanisms were included to characterize the single-crystal behavior: basal, prismatic, and pyramidal ⟨c+a⟩ slip, together with tensile twinning. The validity of the resulting parameters was proved by the ability of the polycrystalline model to predict independent macroscopic tests in different directions. Secondly, the influence of neodymium (Nd) content on an extruded polycrystalline Mg-Mn-Nd alloy was studied using the same homogenization and optimization framework. The effect of Nd addition was a progressive isotropization of the macroscopic behavior. The model showed that this increase in macroscopic isotropy was due both to a randomization of the initial texture and to an increase in the isotropy of the crystal behavior (similar values of the CRSSs of the different modes). Finally, the model was used to analyze the effect of temperature on the crystal behavior of the Mg-Mn-Nd alloy. Introducing non-Schmid effects on the pyramidal ⟨c+a⟩ slip mode into the model made it possible to capture the inverse strength differential that appeared between tension and compression above 150 °C. This is the first time, to the author's knowledge, that non-Schmid effects have been reported for Mg alloys.
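For illustration, the core of such an inverse fit can be sketched as a Levenberg-Marquardt least-squares problem. In the sketch below, a toy Voce-type hardening surrogate stands in for the RVE finite element simulation, and both the parameters and the "experimental" curve are invented for the example.

```python
# A minimal sketch of the inverse-optimization idea: fit crystal-level
# parameters by driving the mismatch between a simulated and a measured
# macroscopic curve to zero with Levenberg-Marquardt. A toy Voce-type
# surrogate replaces the RVE finite element model; all numbers are invented.
import numpy as np
from scipy.optimize import least_squares

strain = np.linspace(0.0, 0.10, 40)

def simulated_stress(params, strain):
    # Hypothetical macroscopic response: saturating Voce-type hardening,
    # collapsing the per-mode CRSSs and hardening rates to one effective pair.
    crss, theta = params
    return crss + theta * (1.0 - np.exp(-20.0 * strain))

# Stand-in for a measured macroscopic stress-strain curve.
true = simulated_stress([80.0, 120.0], strain)
measured = true + np.random.default_rng(1).normal(0.0, 2.0, strain.size)

def residuals(params):
    # The quantity the Levenberg-Marquardt algorithm minimizes in a
    # least-squares sense: model prediction minus experiment.
    return simulated_stress(params, strain) - measured

fit = least_squares(residuals, x0=[50.0, 50.0], method="lm")
print("identified parameters:", fit.x)
```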
Abstract:
The purpose of this project is, first and foremost, to introduce the topic of nonlinear vibrations and oscillations in mechanical systems, and in particular nonlinear normal modes (NNMs), to a wider audience of researchers and technicians. To that end, the dynamical behavior and properties of nonlinear mechanical systems are first outlined through the analysis of a pair of exemplary models with the harmonic balance method, and the conclusions drawn are contrasted with linear vibration theory. It is then argued how nonlinear normal modes can, in spite of their limitations, predict the frequency response of a mechanical system. After discussing those introductory concepts, I present a Matlab package called 'NNMcont', developed by a group of researchers from the University of Liège. This package allows the analysis of nonlinear normal modes of vibration in a range of mechanical systems as extensions of the linear modes; it relies on numerical methods and a 'continuation algorithm' for the computation of the nonlinear normal modes of a conservative mechanical system. To demonstrate its functionality, a two-degree-of-freedom mechanical system with elastic nonlinearities is analyzed. This model comprises a mass suspended on a foundation by a spring-viscous damper mechanism (analogous to a very simplified model of most suspended structures and machines) to which a mass damper is attached as a passive vibration control system. The results of the computation are displayed on frequency-energy plots showing the NNM branches, along with modal curves and time-series plots for each normal mode. Finally, a critical analysis of the results is carried out, with an eye to what they can tell the researcher about the dynamical properties of the system.
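As a flavor of the harmonic balance reasoning used in the introductory analysis, a single-harmonic balance of an undamped Duffing oscillator already yields the frequency-energy dependence that NNM branches generalize. The sketch below is a textbook calculation with illustrative parameter values, unrelated to the NNMcont implementation itself.

```python
# Single-harmonic balance for x'' + w0^2 x + g x^3 = 0. Substituting
# x = a*cos(w t) and balancing the cos(w t) terms gives the backbone
# w(a)^2 = w0^2 + (3/4) g a^2, i.e. frequency rises with modal amplitude,
# which is exactly the frequency-energy dependence shown on NNM plots.
import numpy as np

w0, g = 1.0, 0.5                          # linear frequency and cubic stiffness
a = np.linspace(0.0, 3.0, 201)            # modal amplitudes
w = np.sqrt(w0**2 + 0.75 * g * a**2)      # harmonic-balance backbone frequency
E = 0.5 * w0**2 * a**2 + 0.25 * g * a**4  # conserved energy at amplitude a

for amp, freq, energy in zip(a[::50], w[::50], E[::50]):
    print(f"a = {amp:4.2f} -> frequency {freq:5.3f} rad/s at energy {energy:6.3f}")
```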
Abstract:
The problem of creating solenoidal vortex elements to satisfy no-slip boundary conditions in Lagrangian numerical vortex methods is solved through the use of impulse elements at walls and their subsequent conversion to vortex loops. The algorithm is not uniquely defined, due to the gauge freedom in the definition of impulse; the numerically optimal choice of gauge remains to be determined. Two different choices are discussed, and an application to flow past a sphere is sketched.
Abstract:
In this paper, we propose a novel algorithm for the rigorous design of distillation columns that integrates a process simulator in a generalized disjunctive programming formulation. The optimal distillation column, or column sequence, is obtained by selecting, for each column section, among a set of column sections with different numbers of theoretical trays. The selection of thermodynamic models, property estimation, etc. is handled entirely in the simulation environment. All the numerical issues related to the convergence of distillation columns (or column sections) are likewise kept in the simulation environment. The model is formulated as a Generalized Disjunctive Programming (GDP) problem and solved using the logic-based outer approximation algorithm without MINLP reformulation. Examples ranging from a single column to a thermally coupled sequence and extractive distillation show the performance of the new algorithm.
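The disjunctive selection idea can be sketched in Pyomo/GDP. In the toy model below, each disjunct represents a candidate section with a different tray count; simple algebraic surrogates stand in for the simulator evaluations, and a big-M reformulation replaces the paper's logic-based outer approximation. All coefficients are hypothetical.

```python
# Toy GDP sketch: pick one column-section alternative (tray count) minimizing
# cost subject to a purity spec. Surrogate relations replace simulator calls.
import math
import pyomo.environ as pyo
from pyomo.gdp import Disjunct, Disjunction

tray_options = [10, 20, 30, 40]

m = pyo.ConcreteModel()
m.cost = pyo.Var(bounds=(0, 1e4))
m.purity = pyo.Var(bounds=(0, 1))

m.section = Disjunct(range(len(tray_options)))
for i, n in enumerate(tray_options):
    # Hypothetical surrogate cost/purity for a section with n trays.
    m.section[i].cost_eq = pyo.Constraint(expr=m.cost == 500 + 25.0 * n)
    m.section[i].purity_eq = pyo.Constraint(
        expr=m.purity == 1 - 0.8 * math.exp(-0.08 * n))

m.pick_one = Disjunction(expr=[m.section[i] for i in range(len(tray_options))])
m.spec = pyo.Constraint(expr=m.purity >= 0.92)
m.obj = pyo.Objective(expr=m.cost, sense=pyo.minimize)

pyo.TransformationFactory("gdp.bigm").apply_to(m)   # big-M reformulation
pyo.SolverFactory("glpk").solve(m)                  # model is linear here
print("trays:", [tray_options[i] for i in range(len(tray_options))
                 if pyo.value(m.section[i].indicator_var)])
```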
Abstract:
We present a derivative-free optimization algorithm coupled with a chemical process simulator for the optimal design of individual and complex distillation processes using a rigorous tray-by-tray model. The proposed approach serves as an alternative to the various models based on nonlinear programming (NLP) or mixed-integer nonlinear programming (MINLP). It combines the advantages of a commercial process simulator (Aspen Hysys), including the especially suited numerical methods developed for the convergence of distillation columns, with the benefits of the particle swarm optimization (PSO) metaheuristic, which does not require gradient information and has the ability to escape from local optima. Our method inherits the superstructure developed in Yeomans, H.; Grossmann, I. E. Optimal design of complex distillation columns using rigorous tray-by-tray disjunctive programming models. Ind. Eng. Chem. Res. 2000, 39 (11), 4326-4335, in which the nonexisting trays are treated as simple bypasses of liquid and vapor flows. The implemented tool provides the optimal configuration of distillation column systems, involving both continuous and discrete variables, through the minimization of the total annual cost (TAC). The robustness and flexibility of the method are demonstrated through the successful design and synthesis of three distillation systems of increasing complexity.
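A minimal PSO loop of the kind described can be sketched as follows; the toy cost function stands in for the simulator-evaluated TAC of a candidate configuration, and the swarm constants are typical textbook values rather than the paper's settings.

```python
# Minimal particle swarm optimization sketch. tac_surrogate is a hypothetical
# multimodal stand-in for a simulator-returned total annual cost.
import numpy as np

def tac_surrogate(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)) + 10.0 * x.size

def pso(f, dim, n_particles=30, n_iter=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()               # global best
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, pulls
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                     # keep inside the box
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

best_x, best_f = pso(tac_surrogate, dim=4)
print("best configuration:", best_x, "surrogate TAC:", best_f)
```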
Abstract:
We propose and discuss a new centrality index for urban street patterns represented as networks in geographical space. This centrality measure, which we call ranking-betweenness centrality, combines the idea behind the random-walk betweenness centrality measure with the idea of ranking the nodes of a network using an adapted PageRank algorithm. We first use a PageRank algorithm in which some information about the network to be analyzed is transformed into numerical values. These values, summarizing the information, are associated with each node by means of a data matrix. After running the adapted PageRank algorithm, a ranking of the nodes according to their importance in the network is obtained. This ranking is the starting point for applying an algorithm based on random-walk betweenness centrality. A detailed example of a real urban street network is discussed to illustrate how the proposed ranking-betweenness centrality is evaluated, with comparisons against other classical centrality measures.
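The two ingredients can be sketched on a toy street-like grid with networkx: a PageRank whose personalization vector encodes per-node data, and a random-walk (current-flow) betweenness on the same graph. How the paper combines the resulting ranking with the betweenness step is specific to the proposed index; the node data below are fabricated.

```python
# Ingredients of a ranking-plus-betweenness analysis on a toy grid network.
import networkx as nx

G = nx.grid_2d_graph(5, 5)   # stand-in for an urban street network

# Hypothetical per-node attribute (e.g. commercial activity), normalized
# into a personalization vector for the adapted PageRank.
data = {n: 1.0 + (n[0] == 2) * 4.0 for n in G}   # boost the middle column
total = sum(data.values())
personalization = {n: v / total for n, v in data.items()}

ranking = nx.pagerank(G, alpha=0.85, personalization=personalization)
rwb = nx.current_flow_betweenness_centrality(G)  # Newman's random-walk betweenness

top = sorted(G, key=lambda n: (ranking[n], rwb[n]), reverse=True)[:5]
print("top nodes by (PageRank rank, random-walk betweenness):", top)
```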
Abstract:
We present an extension of the logic outer-approximation algorithm for dealing with disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included to take into account only those parts of the underlying model that become relevant under certain processing conditions. This improves the numerical robustness of the optimization algorithm, since the parts of the model that are not active are discarded, leading to a reduced-size problem and avoiding potential model singularities. We test the proposed algorithm on three examples with complex dynamic behavior of different kinds. In all the case studies, the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine, for volumetric reconstruction of tomography data; robotics, to reconstruct surfaces or scenes from range-sensor information; industrial systems, for quality control of manufactured objects; or even biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios.
The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems among those described above can be addressed. To that end, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean distance, the de facto standard in implementations of the algorithm in the literature. In this analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to assess the convergence, efficacy, and cost of the method and to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios, and initial situations.
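A minimal point-to-point ICP in which the neighbor-search metric is a parameter illustrates the kind of substitution studied here: p=2 is the usual Euclidean distance, while p=1 (Manhattan) or p=inf (Chebyshev) are cheaper Minkowski alternatives. The rigid update remains the standard SVD-based (Kabsch) step; the data and iteration counts are toy values.

```python
# Minimal ICP sketch with a pluggable Minkowski metric for the neighbor search.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares rotation R and translation t mapping src onto dst (Kabsch).
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, p=2, n_iter=30):
    tree = cKDTree(target)             # built once; queried under metric p
    src = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src, p=p)  # closest neighbors in the chosen metric
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, (500, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.1, -0.2, 0.05])
aligned = icp(source, target, p=1)     # Manhattan-metric neighbor search
print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())
```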
Abstract:
A generic method for the estimation of parameters of Stochastic Ordinary Differential Equations (SODEs) is introduced and developed. The algorithm, called the GePERs method, uses a genetic optimisation algorithm to minimise a stochastic objective function based on the Kolmogorov-Smirnov (KS) statistic, which is formed from numerical simulations. Some of the factors that improve the precision of the estimates are also examined. The method is used to estimate the parameters of diffusion and jump-diffusion equations, and it is applied to the problem of model selection for the Queensland electricity market.
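The general scheme can be sketched as follows (the abstract does not spell out the GePERs internals, so this is only the idea): candidate parameters are scored by simulating the SODE with Euler-Maruyama and comparing simulated against observed samples via the two-sample KS statistic, which a simple evolutionary loop then minimises. The diffusion dX = mu X dt + sigma X dW and all settings are illustrative.

```python
# Sketch: evolutionary minimisation of a simulation-based KS objective.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def simulate_terminal(mu, sigma, n=400, T=1.0, steps=50, x0=1.0):
    dt = T / steps
    x = np.full(n, x0)
    for _ in range(steps):  # Euler-Maruyama step for dX = mu X dt + sigma X dW
        x = x + mu * x * dt + sigma * x * np.sqrt(dt) * rng.normal(size=n)
    return x

observed = simulate_terminal(mu=0.2, sigma=0.3)   # stand-in for market data

def ks_objective(params):
    mu, sigma = params
    return ks_2samp(simulate_terminal(mu, abs(sigma)), observed).statistic

# Minimal truncation-selection evolutionary search over the two parameters.
pop = rng.uniform([-0.5, 0.05], [0.5, 0.8], size=(20, 2))
for _ in range(40):
    scores = np.array([ks_objective(p) for p in pop])
    parents = pop[np.argsort(scores)[:5]]          # keep the 5 best
    pop = np.repeat(parents, 4, axis=0) + rng.normal(0, 0.03, (20, 2))
best = pop[np.argmin([ks_objective(p) for p in pop])]
print("estimated (mu, sigma):", best)
```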
Abstract:
The stable similarity reduction of a nonsymmetric square matrix to tridiagonal form has been a long-standing problem in numerical linear algebra. The biorthogonal Lanczos process is in principle a candidate method for this task, but in practice it is confined to sparse matrices and is restarted periodically because roundoff errors affect its three-term recurrence scheme and degrade the biorthogonality after a few steps. This adds to its vulnerability to serious breakdowns or near-breakdowns, the handling of which involves recovery strategies such as the look-ahead technique, which needs a careful implementation to produce a block-tridiagonal form with unpredictable block sizes. Other candidate methods, geared generally towards full matrices, rely on elementary similarity transformations that are prone to numerical instabilities. Such concomitant difficulties have hampered finding a satisfactory solution to the problem for either sparse or full matrices. This study focuses primarily on full matrices. After outlining earlier tridiagonalization algorithms from within a general framework, we present a new elimination technique combining orthogonal similarity transformations that are stable. We also discuss heuristics to circumvent breakdowns. Applications of this study include eigenvalue calculation and the approximation of matrix functions.
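For reference, the classical biorthogonal (two-sided) Lanczos recurrence that this study critiques, not the new elimination technique it proposes, can be sketched as follows. It builds V and W with W^T V = I and a tridiagonal T = W^T A V; the loop aborts on the (near-)breakdown delta ~ 0 that look-ahead strategies must handle, and in floating point the printed biorthogonality error grows after a few steps.

```python
# Textbook two-sided Lanczos biorthogonalization (three-term recurrences).
import numpy as np

def biorthogonal_lanczos(A, v1, w1, m):
    n = A.shape[0]
    V, W = np.zeros((n, m)), np.zeros((n, m))
    alpha, beta, gamma = np.zeros(m), np.zeros(m), np.zeros(m)
    s = w1 @ v1                            # normalize so w1^T v1 = 1
    V[:, 0] = v1 / np.sqrt(abs(s))
    W[:, 0] = np.sign(s) * w1 / np.sqrt(abs(s))
    for j in range(m):
        alpha[j] = W[:, j] @ A @ V[:, j]
        v = A @ V[:, j] - alpha[j] * V[:, j]
        w = A.T @ W[:, j] - alpha[j] * W[:, j]
        if j > 0:
            v -= beta[j] * V[:, j - 1]
            w -= gamma[j] * W[:, j - 1]
        if j == m - 1:
            break
        delta = w @ v
        if abs(delta) < 1e-12:             # serious breakdown: look-ahead needed
            raise RuntimeError(f"breakdown at step {j}")
        gamma[j + 1] = np.sqrt(abs(delta))
        beta[j + 1] = delta / gamma[j + 1]
        V[:, j + 1] = v / gamma[j + 1]     # enforces w_{j+1}^T v_{j+1} = 1
        W[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(gamma[1:], -1)
    return V, W, T

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
V, W, T = biorthogonal_lanczos(A, rng.standard_normal(50), rng.standard_normal(50), 12)
print("biorthogonality error:", np.abs(W.T @ V - np.eye(12)).max())
```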
Abstract:
Energy saving in mobile hydraulic machinery, aimed at reducing fuel consumption, has been one of the principal interests of many researchers and OEMs in recent years. Many different solutions for improving fuel efficiency have been proposed and investigated in the literature, ranging from novel system architectures and control strategies to hybrid solutions. This thesis deals with the energy analysis of the hydraulic system of a mid-size excavator through mathematical tools. To conduct the analyses, a multibody mathematical model of the hydraulic excavator under investigation is developed and validated on the basis of experimental activities, both on a test bench and in the field. The analyses are carried out considering the typical excavator working cycles defined by the JCMAS standard. The simulation results are analyzed and discussed in detail in order to define different solutions for energy saving in load-sensing (LS) hydraulic systems. Among the proposed energy-saving solutions, energy recovery systems appear very promising for fuel consumption reduction in mobile machinery. In this thesis a novel energy recovery system architecture is proposed and described in detail; its dimensioning procedure takes advantage of a dynamic programming algorithm, and a prototype is realized and tested on the excavator under investigation. Finally, the proposed energy-saving solutions are compared against the standard machine architecture, and a novel hybrid excavator with energy savings of up to 11% is presented.
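To give a flavor of the kind of dynamic programming used in such a dimensioning procedure, the toy sketch below steps through a sampled duty cycle, choosing at each step how much recoverable power to store in (or reuse from) an accumulator with discretized state of charge so as to minimize the energy drawn from the engine. The cycle and all machine numbers are entirely invented.

```python
# Toy DP over a discretized accumulator state of charge along a duty cycle.
import numpy as np

P_demand = np.array([30, 45, 20, -25, -35, 15, 40, -20, 10, 25.0])  # kW, <0 = recoverable
dt, E_max, levels = 1.0, 50.0, 51          # s, kJ capacity, SOC grid size
soc = np.linspace(0.0, E_max, levels)      # discretized stored energy (kJ)
eta = 0.8                                  # charging efficiency

INF = 1e18
cost = np.full(levels, INF); cost[0] = 0.0 # start with an empty accumulator
for P in P_demand:
    new_cost = np.full(levels, INF)
    for i, e in enumerate(soc):
        if cost[i] >= INF:
            continue
        for j, e_next in enumerate(soc):   # candidate next state of charge
            stored = e_next - e
            if P < 0:                      # braking/lowering: energy can be stored
                if 0 <= stored <= -P * dt * eta:
                    fuel = 0.0
                else:
                    continue
            else:                          # demand: reuse stored energy, burn the rest
                if -e <= stored <= 0:
                    fuel = max(P * dt + stored, 0.0)
                else:
                    continue
            new_cost[j] = min(new_cost[j], cost[i] + fuel)
    cost = new_cost

print("minimum engine energy over the cycle: %.1f kJ" % cost.min())
print("without recovery: %.1f kJ" % (np.maximum(P_demand, 0).sum() * dt))
```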
Abstract:
Purpose – This paper sets out to study a production-planning problem for printed circuit board (PCB) assembly. A PCB assembly company may have a number of assembly lines for production of several product types in large volume. Design/methodology/approach – Pure integer linear programming models are formulated for assigning the product types to assembly lines (the line assignment problem), with the objective of minimizing the total production cost. In this approach, the unrealistic assignments that earlier formulations suffered from are avoided by incorporating several constraints into the model. A genetic algorithm is then developed to solve the line assignment problem. Findings – The procedure of the genetic algorithm and a numerical example illustrating the models are provided. The algorithm is also shown to be effective and efficient in dealing with the problem. Originality/value – This paper studies the line assignment problem arising in a PCB manufacturing company in which the production volume is high.
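A toy version of the underlying integer programming model (which the paper then attacks with a GA) can be written in a few lines with PuLP: binary variables assign each product type to exactly one line, with capacity constraints of the kind used to rule out unrealistic assignments. All costs, volumes, and capacities below are hypothetical.

```python
# Toy pure ILP for the line assignment problem, minimizing production cost.
import pulp

products = ["P1", "P2", "P3", "P4"]
lines = ["L1", "L2"]
cost = {("P1","L1"): 8, ("P1","L2"): 6, ("P2","L1"): 5, ("P2","L2"): 9,
        ("P3","L1"): 7, ("P3","L2"): 7, ("P4","L1"): 4, ("P4","L2"): 8}
volume = {"P1": 300, "P2": 500, "P3": 200, "P4": 400}   # boards per shift
capacity = {"L1": 800, "L2": 900}                       # boards per shift

x = pulp.LpVariable.dicts("assign", (products, lines), cat="Binary")
model = pulp.LpProblem("line_assignment", pulp.LpMinimize)
model += pulp.lpSum(cost[p, l] * volume[p] * x[p][l]
                    for p in products for l in lines)
for p in products:                   # each product goes to exactly one line
    model += pulp.lpSum(x[p][l] for l in lines) == 1
for l in lines:                      # assigned volume must fit line capacity
    model += pulp.lpSum(volume[p] * x[p][l] for p in products) <= capacity[l]

model.solve(pulp.PULP_CBC_CMD(msg=False))
for p in products:
    for l in lines:
        if x[p][l].value() > 0.5:
            print(p, "->", l)
```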
Abstract:
The generalised transportation problem (GTP) is an extension of the linear Hitchcock transportation problem. However, it does not have the unimodularity property, so a linear programming solution (such as one obtained by the simplex method) is not guaranteed to be integer. This is a major difference between the GTP and the Hitchcock transportation problem. Although some special algorithms, such as the generalised stepping-stone method, have been developed, they are based on the linear programming model and relax the integer solution requirement of the GTP. This paper proposes a genetic algorithm (GA) to solve the GTP, and a numerical example is presented to show the algorithm and its efficiency.
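A minimal GA sketch for a GTP-like problem follows; its point is that the shipment matrix stays integer throughout the search, which is exactly what the LP relaxation cannot guarantee. Feasibility (supplies consumed at a rate e[i][j] per unit shipped, demands met) is enforced here with a penalty term rather than the paper's operators, and all data are invented.

```python
# Minimal integer-coded GA with penalties for a toy generalised transportation problem.
import numpy as np

rng = np.random.default_rng(0)
c = np.array([[4.0, 6.0, 5.0], [7.0, 3.0, 8.0]])   # unit shipping costs
e = np.array([[1.2, 1.0, 1.1], [1.0, 1.3, 1.0]])   # resource use per unit (the "generalised" part)
supply = np.array([60.0, 50.0])
demand = np.array([25.0, 30.0, 35.0])

def fitness(x):                        # x: integer shipment matrix, shape (2, 3)
    over = np.maximum((e * x).sum(axis=1) - supply, 0).sum()
    short = np.abs(x.sum(axis=0) - demand).sum()
    return (c * x).sum() + 1e3 * (over + short)    # cost plus constraint penalty

pop = rng.integers(0, 30, size=(40, 2, 3))
for _ in range(300):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[:10]]         # truncation selection
    children = parents[rng.integers(0, 10, 40)].copy()
    mask = rng.random(children.shape) < 0.15       # integer mutation
    children[mask] += rng.integers(-3, 4, mask.sum())
    pop = np.clip(children, 0, 60)
best = pop[np.argmin([fitness(x) for x in pop])]
print("best integer shipment plan:\n", best)
```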
Abstract:
We investigate a digital back-propagation simplification method to enable computationally efficient digital nonlinearity compensation for coherently detected 112 Gb/s polarization-multiplexed quadrature phase-shift keyed transmission over a 1,600 km link (20x80 km) with no inline compensation. Through numerical simulation, we report up to an 80% reduction in the number of back-propagation steps required to perform nonlinear compensation, in comparison to the standard back-propagation algorithm. The method takes into account the correlation between adjacent symbols at a given instant using a weighted-average approach, together with optimization of the position of the nonlinear compensator stage, to enable practical digital back-propagation.
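For context, a single "back-propagation step" in the standard split-step scheme applies inverse chromatic dispersion in the frequency domain followed by an inverse Kerr nonlinear phase rotation in the time domain. The single-polarization sketch below uses typical fiber numbers (not the paper's) and random noise as a stand-in for a received signal; the weighted-average correlation trick that cuts the step count would replace the |E|^2 term with a weighted average over adjacent symbols, indicated only by a comment here.

```python
# Minimal split-step digital back-propagation loop (single polarization).
import numpy as np

L_total = 1_600e3            # m, 20 x 80 km link
n_steps = 20                 # back-propagation steps for the whole link
beta2 = -21.7e-27            # s^2/m, group-velocity dispersion (typical SMF)
gamma = 1.3e-3               # 1/(W m), Kerr nonlinearity (typical SMF)
fs = 56e9                    # sample rate, Hz

def back_propagate(E, n_steps):
    h = L_total / n_steps
    w = 2 * np.pi * np.fft.fftfreq(E.size, d=1 / fs)   # angular frequencies
    D_inv = np.exp(-0.5j * beta2 * (w ** 2) * h)       # inverse dispersion operator
    for _ in range(n_steps):
        E = np.fft.ifft(np.fft.fft(E) * D_inv)         # linear step (frequency domain)
        # Weighted-average DBP would average |E|^2 over adjacent symbols here.
        E = E * np.exp(-1j * gamma * h * np.abs(E) ** 2)  # inverse Kerr phase
    return E

received = (np.random.default_rng(0).standard_normal(4096)
            + 1j * np.random.default_rng(1).standard_normal(4096)) * np.sqrt(0.5e-3)
compensated = back_propagate(received, n_steps)
print("output power (W):", np.mean(np.abs(compensated) ** 2))
```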