24 results for abstract optimization problems
Abstract:
The goal of this Master's thesis is to develop and analyze an optimization method for finding the geometric shape of classical horizontal-axis wind turbine blades based on a set of criteria. The thesis develops a technique that allows the designer to determine the weights of factors such as the power coefficient, the sound pressure level, and the cost function in the overall process of blade shape optimization. The optimization technique applies the desirability function, which had not previously been used in this kind of technical problem; in this sense the work can claim originality. To make the analysis and optimization processes more convenient, a software application was developed.
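For readers unfamiliar with the technique, a minimal sketch of desirability-based aggregation (in the Derringer-Suich style) follows; the criterion bounds, values, and weights are illustrative placeholders, not the thesis's actual data.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, s=1.0):
    """Desirability in [0, 1] for a criterion to maximize, e.g. the power
    coefficient: 0 at/below lo, 1 at/above hi."""
    return ((np.clip(y, lo, hi) - lo) / (hi - lo)) ** s

def d_smaller_is_better(y, lo, hi, s=1.0):
    """Desirability for a criterion to minimize, e.g. sound pressure or cost."""
    return ((hi - np.clip(y, lo, hi)) / (hi - lo)) ** s

def overall_desirability(ds, weights):
    """Weighted geometric mean of individual desirabilities."""
    ds, w = np.asarray(ds), np.asarray(weights)
    return float(np.prod(ds ** (w / w.sum())))

# Illustrative evaluation of one candidate blade shape (all numbers invented):
d_power = d_larger_is_better(0.44, 0.30, 0.50)    # power coefficient
d_noise = d_smaller_is_better(52.0, 45.0, 60.0)   # sound pressure level, dB
d_cost  = d_smaller_is_better(1.8, 1.0, 3.0)      # relative cost
print(overall_desirability([d_power, d_noise, d_cost], weights=[2, 1, 1]))
```

Because the overall desirability is a geometric mean, any criterion with zero desirability vetoes a candidate shape, while the weights let the designer tune the trade-off between power, noise, and cost.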
Abstract:
This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems arise, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one motivation for our study was the request for improvements from industrial sources. We approach three different resource-constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized formulation of the MINMAX resource allocation formulation of the apportionment problem, and it can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well-studied geometric optimization problem), and provide a heuristic algorithm for it.
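As background for the first of these problems, the sketch below implements the classic Keep-Tool-Needed-Soonest (KTNS) policy, the standard baseline for tool switching under a fixed job sequence; the job data are illustrative, and the thesis's improved heuristics are not reproduced here.

```python
def ktns_switches(job_tools, capacity):
    """Keep-Tool-Needed-Soonest for a fixed job sequence: when the magazine
    is full, evict the loaded tool whose next use lies furthest in the
    future. Assumes no job needs more than `capacity` tools. Returns the
    number of tool switches after the initial loading."""
    def next_use(tool, start):
        for t in range(start, len(job_tools)):
            if tool in job_tools[t]:
                return t
        return float("inf")

    magazine, switches = set(), 0
    for i, needed in enumerate(job_tools):
        for tool in needed:
            if tool in magazine:
                continue
            if len(magazine) >= capacity:
                # Evict the loaded tool (not needed now) used furthest ahead.
                victim = max(magazine - needed, key=lambda t: next_use(t, i + 1))
                magazine.remove(victim)
                switches += 1
            magazine.add(tool)
    return switches

# Illustrative instance: four jobs, magazine capacity 3.
jobs = [{1, 2, 3}, {2, 3, 4}, {1, 4}, {2, 5}]
print(ktns_switches(jobs, capacity=3))  # -> 3
```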
Abstract:
This work contains a series of studies on the optimization of three real-world scheduling problems: school timetabling, sports scheduling, and staff scheduling. These challenging problems are solved to customer satisfaction using the proposed PEAST algorithm; customer satisfaction here refers to the fact that implementations of the algorithm are in industrial use. The PEAST algorithm is a product of long-term research and development. Its first version was introduced in 1998, and this thesis is the result of five years of further development. One of the most valuable characteristics of the algorithm has proven to be its ability to solve a wide range of scheduling problems, and it is likely that it can be tuned to tackle a range of other combinatorial problems as well. The algorithm combines features from numerous different metaheuristics, which is the main reason for its success. In addition, the implementation of the algorithm is fast enough for real-world use.
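The abstract does not describe the algorithm's internals, but as a hedged illustration of how features from several metaheuristics can be combined in a single search loop, here is a generic skeleton mixing greedy local moves, an annealing-style acceptance rule, and periodic shuffling; it is not the PEAST algorithm itself, and all names and parameters are placeholders.

```python
import math, random

def hybrid_search(initial, neighbors, cost, iters=10_000,
                  temp=1.0, cooling=0.999, shuffle_every=1000, shuffle=None):
    """Generic single-solution metaheuristic skeleton: greedy local moves,
    annealing-style acceptance of uphill moves, and periodic shuffling to
    escape local optima. Purely illustrative, not the PEAST algorithm."""
    current = best = initial
    for k in range(iters):
        cand = random.choice(neighbors(current))
        delta = cost(cand) - cost(current)
        # Accept improvements always, deteriorations with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
            current = cand
        if cost(current) < cost(best):
            best = current
        temp *= cooling
        if shuffle and k % shuffle_every == shuffle_every - 1:
            current = shuffle(current)  # perturbation step
    return best

# Toy usage: minimize (x - 7)^2 over the integers via +/-1 moves.
print(hybrid_search(0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2,
                    shuffle=lambda x: x + random.randint(-5, 5)))
```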
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
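As a concrete reminder of the machinery the thesis builds on, a minimal random-walk Metropolis sampler is sketched below; the exponential-decay model, noise level, and step size are illustrative assumptions, not the thesis's adaptive MCMC methods.

```python
import numpy as np

def metropolis(log_post, theta0, n_samples=5000, step=0.1, seed=0):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio), and record the chain."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Illustrative use: calibrate the rate k in y = exp(-k t) from noisy data.
t = np.linspace(0.0, 5.0, 20)
y_obs = np.exp(-0.8 * t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
log_post = lambda th: -0.5 * np.sum((y_obs - np.exp(-th[0] * t)) ** 2) / 0.05**2
chain = metropolis(log_post, theta0=[0.5])
print(chain[1000:].mean(axis=0))  # posterior mean of k, close to 0.8
```

The whole chain approximates the posterior distribution of the parameters, which is what makes the uncertainty-aware experiment design and model-based optimization tasks described above possible.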
Abstract:
Optimization is a common procedure, for example after a process has been modified or renewed. Its aim is to find the best way to run the process, or individual parts of it, with respect to certain quality properties, for instance. The purpose of this work was, following an investment, to optimize four variables (the refining and proportion of a certain pulp used in the middle ply, wet pressing, and the amount of spray starch) with respect to three quality properties: ply bond strength, geometric bending stiffness, and smoothness. Five mill-scale trial runs were carried out for the work. In the first trial run, water or spray starch was added to one of the ply interfaces of a three-ply board; in the second, the refining and refiner combinations of the aforementioned middle-ply pulp were varied. The first trial examined the development of ply bond strength, the second that of other strength properties. The third trial run studied the effect of the refining and proportion of the middle-ply pulp, together with changes in the shoe press linear load, on ply bond strength, geometric bending stiffness, and smoothness. The fourth trial attempted to reproduce the best point of the previous run and, by slightly adjusting the parameters, to achieve even better quality properties; here, too, the effect of the variables on ply bond strength, geometric bending stiffness, and smoothness was studied. The purpose of the final trial was to examine the effect of reducing the amount of the same middle-ply pulp on ply bond strength. Owing to various setbacks, the results obtained from the trial runs remained rather meagre. The trials nevertheless showed that the strength properties did not improve even when refining was continued. Refining that was unnecessary for the development of strength properties could therefore be omitted, saving energy and avoiding other problems potentially caused by extensive refining. With less refining, the specific edge load could also be kept below the level desired at the mill. The missing strength properties must be achieved by other means.
Abstract:
The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to describe the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g., the design of biorefiners, because the methodology is fully generalized and can be easily modified.
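To make the bi-level construction concrete, here is a hedged toy sketch: the outer problem chooses a design variable while the inner problem finds the optimal operating (control) setting for each candidate design. The quadratic cost functions are placeholders, not the dynamic multiobjective papermaking models of the thesis.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def inner_cost(u, x):
    """Operating cost of control u under design x (illustrative)."""
    return (u - x) ** 2 + 0.1 * u ** 2

def outer_cost(x):
    """Design cost: a capital term plus the *optimal* operating cost.
    The inner problem is re-solved for every candidate design."""
    inner = minimize_scalar(inner_cost, args=(x[0],))
    return 0.5 * x[0] ** 2 + inner.fun

res = minimize(outer_cost, x0=[1.0], method="Nelder-Mead")
print(res.x, res.fun)  # design with the best achievable operation
```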
Abstract:
The Arctic region is becoming a very active area of industrial development, since it may contain approximately 15-25% of hydrocarbon and other valuable natural resources, which are in great demand nowadays. Harsh operating conditions make the Arctic region difficult to access, with temperatures that can drop below -50 °C in winter and various additional loads. As a result, new and modified metallic materials are being introduced, which can cause certain problems in welding them properly. Steel is still the most widely used material in the Arctic regions due to its high mechanical properties, low cost, and manufacturability. Moreover, recent developments in steel manufacturing make it possible to produce microalloyed high strength steel with yield strengths up to 1100 MPa that can be operated at temperatures down to -60 °C while possessing reasonable weldability, ductility, and suitable impact toughness, the most crucial property for Arctic usability. For many years, arc welding was the dominant joining method for metallic materials. Recently, other joining methods have been successfully introduced into welding production in response to growing industrial demands; one of them is laser-arc hybrid welding. Laser-arc hybrid welding combines the advantages and eliminates the disadvantages of both joining methods: it produces fewer distortions, reduces the need for edge preparation, generates a narrower heat-affected zone, and increases welding speed and productivity significantly. Moreover, because filler wire can easily be introduced, the mechanical properties of the joints can be adjusted to produce suitable quality; with laser-arc hybrid welding it is even possible to achieve weld metal matching the base material with low-alloy welding wires, without excessive softening of the HAZ in high strength steels. As a result, laser-arc hybrid welding may become one of the most desired and dominant welding technologies; it already operates in the automotive and shipbuilding industries with great success, and in the future its use can be extended to the offshore, pipe-laying, and heavy equipment industries for Arctic environments. CO2 and Nd:YAG laser sources in combination with a gas metal arc source have been used widely in the past two decades. Recently, fiber laser sources have offered high power output with excellent beam quality, very high electrical efficiency, low maintenance expenses, and greater mobility thanks to fiber optics. As a result, the fiber laser-arc hybrid process offers even more extensive advantages and applications. However, information about fiber or disk laser-arc hybrid welding is very limited. The objectives of this Master's thesis therefore concentrate on the study of fiber laser-MAG hybrid welding parameters, in order to understand the resulting mechanical properties and quality of the welds. Only ferrous materials are reviewed in this work, and a qualitative methodological approach has been used to achieve the objectives. The study demonstrates that laser-arc hybrid welding is suitable for welding steels of many types, thicknesses, and strengths, with acceptable mechanical properties and very high productivity. New developments in the fiber laser-arc hybrid process offer extended capabilities over CO2 lasers combined with the arc. This work can be used as a guideline to hybrid welding technology, with a comprehensive study of the effect of welding parameters on joint quality.
Abstract:
Identification of low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is thus to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include identification of faults from seismic data and identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry, and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
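As a hedged illustration of ridge projection on a Gaussian kernel density estimate, the sketch below uses a plain subspace-constrained mean-shift step, moving a point along the density gradient restricted to the Hessian eigendirections of smallest eigenvalue; the thesis's trust region Newton method is more sophisticated, and the data and bandwidth here are placeholders.

```python
import numpy as np

def scms_step(x, data, h, ridge_dim=1):
    """One subspace-constrained mean-shift step toward a density ridge."""
    diff = data - x                                  # (n, D)
    w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)
    mean_shift = (w[:, None] * diff).sum(axis=0) / w.sum()
    # Hessian of the Gaussian kernel density, up to a constant factor.
    hess = (w[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0) / h**4 \
           - w.sum() * np.eye(x.size) / h**2
    vals, vecs = np.linalg.eigh(hess)                # ascending eigenvalues
    V = vecs[:, : x.size - ridge_dim]                # directions across the ridge
    return x + V @ (V.T @ mean_shift)                # constrained ascent

def project_to_ridge(x, data, h, ridge_dim=1, steps=100):
    x = np.asarray(x, dtype=float).copy()
    for _ in range(steps):
        x = scms_step(x, data, h, ridge_dim)
    return x

# Illustrative use: noisy samples along a parabola in the plane.
rng = np.random.default_rng(0)
t = rng.uniform(-2, 2, 300)
data = np.c_[t, t**2] + 0.1 * rng.standard_normal((300, 2))
print(project_to_ridge([0.5, 1.0], data, h=0.3))  # lands near the curve
```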
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied, which solve the problems of memory storage and the huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead caused by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of VEnKF were compared with measurements recorded at an automatic station located in the north-western part of the lake; however, due to TSM data sparsity in both time and space, the match was not good.
The use of multiple automatic stations with real-time data is important to alleviate the time sparsity problem; combined with DA, this will, for instance, help in better understanding environmental hazard variables. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, together with this ensemble size limit, points to the emerging area of Reduced Order Modeling (ROM). To save computational resources, running the full-blown model is avoided in ROM. When ROM is applied with the non-intrusive DA approach, it may yield a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
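As a hedged sketch of the non-intrusive coupling idea, the skeleton below alternates between launching a model executable that communicates only through state files and a DA update; the executable name, file names, dimensions, and the plain stochastic EnKF analysis used here are all illustrative placeholders, not the COHERENS/VEnKF implementation.

```python
import subprocess
import numpy as np

# Hypothetical setup: ensemble of states, per-cycle observations, a linear
# observation operator H, and observation error variance r.
n_members, n_state, n_cycles = 20, 100, 7
rng = np.random.default_rng(0)
states = rng.standard_normal((n_members, n_state))
H = np.zeros((3, n_state)); H[0, 10] = H[1, 50] = H[2, 90] = 1.0
observations = rng.standard_normal((n_cycles, 3))

def advance(member):
    """Advance one member by invoking the (hypothetical) model executable.
    Communication goes through files only, so the model code needs no
    DA-specific changes beyond reading and writing its state."""
    np.savetxt(f"state_in_{member}.txt", states[member])
    subprocess.run(["./model.exe", f"state_in_{member}.txt",
                    f"state_out_{member}.txt"], check=True)
    return np.loadtxt(f"state_out_{member}.txt")

def enkf_update(ens, y, H, r):
    """Plain stochastic EnKF analysis step, standing in for the thesis's
    variational resampling: ens has shape (n_members, n_state)."""
    X = ens - ens.mean(axis=0)
    P = X.T @ X / (len(ens) - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(len(y)))
    return np.array([m + K @ (y + np.sqrt(r) * rng.standard_normal(len(y)) - H @ m)
                     for m in ens])

for cycle in range(n_cycles):
    # Propagate every member (these runs could execute in parallel), then
    # wait for all of them before assimilating this cycle's observations.
    states = np.array([advance(m) for m in range(n_members)])
    states = enkf_update(states, observations[cycle], H, r=0.01)
```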