69 results for Vector optimization
Abstract:
The Arctic region is becoming a very active area of industrial development, since it may contain approximately 15-25% of the world's hydrocarbon reserves and other valuable natural resources, which are in great demand nowadays. Harsh operating conditions make the Arctic region difficult to access: temperatures can drop below -50 °C in winter and structures must withstand various additional loads. As a result, new and modified metallic materials are being introduced, which can cause problems in welding them properly. Steel is still the most widely used material in Arctic regions owing to its high mechanical properties, low cost and manufacturability. Moreover, recent developments in steelmaking make it possible to produce microalloyed high-strength steel with a yield strength of up to 1100 MPa that can be operated at temperatures down to -60 °C while retaining reasonable weldability, ductility and suitable impact toughness, the most crucial property for Arctic usability. For many years, arc welding was the dominant method for joining metallic materials. Recently, other joining methods have been successfully introduced into welding manufacturing in response to growing industrial demands, and one of them is laser-arc hybrid welding. Laser-arc hybrid welding combines the advantages and eliminates the disadvantages of both constituent processes: it produces fewer distortions, reduces the need for edge preparation, generates a narrower heat-affected zone, and increases welding speed and productivity significantly. Moreover, because filler wire is easily introduced, the mechanical properties of the joints can be adjusted to produce suitable quality. With laser-arc hybrid welding it is also possible to achieve weld metal matching the base material, even with low-alloy welding wires, without excessive softening of the HAZ in high-strength steels. Laser-arc hybrid welding may therefore become the dominant welding technology; it is already used with great success in the automotive and shipbuilding industries, and in the future it may be extended to the offshore, pipe-laying and heavy-equipment industries for Arctic environments. CO2 and Nd:YAG laser sources combined with a gas metal arc source have been used widely over the past two decades. Recently, fiber laser sources have offered high power output with excellent beam quality, very high electrical efficiency, low maintenance costs and greater mobility thanks to fiber optics. As a result, the fiber laser-arc hybrid process offers even broader advantages and applications. However, information about fiber or disk laser-arc hybrid welding is very limited. The objectives of this Master's thesis therefore concentrate on studying fiber laser-MAG hybrid welding parameters in order to understand the resulting mechanical properties and quality of the welds. Only ferrous materials are reviewed in this work. A qualitative methodological approach was used to achieve the objectives. The study demonstrates that laser-arc hybrid welding is suitable for welding many types, thicknesses and strengths of steel with acceptable mechanical properties at very high productivity. New developments in the fiber laser-arc hybrid process offer extended capabilities over CO2 lasers combined with the arc. This work can be used as a guideline to hybrid welding technology, with a comprehensive study of the effect of welding parameters on joint quality.
Abstract:
The iron and steelmaking industry is among the major contributors to anthropogenic carbon dioxide emissions in the world. The rising level of CO2 in the atmosphere and the global concern about the greenhouse effect and climate change have prompted considerable investigation into how to reduce the energy intensity and CO2 emissions of this industrial sector. In this thesis the problem is tackled by mathematical modeling and optimization using three different approaches. First, the possibility of using biomass in the integrated steel plant, particularly as an auxiliary reductant in the blast furnace, is investigated. By pre-processing the biomass, its heating value and carbon content can be increased at the same time as its oxygen content is decreased. As the compression strength of the pre-processed biomass is lower than that of coke, it is not suitable for replacing a major part of the coke in the blast furnace burden; the biomass is therefore assumed to be injected at the tuyere level of the blast furnace. Second, carbon capture and storage (CCS) is nowadays mostly associated with power plants, but it can also be used to reduce the CO2 emissions of an integrated steel plant. In the case of a blast furnace, the effect of CCS can be further increased by recycling the CO2-stripped top gas back into the process. However, this affects the economy of the integrated steel plant, as the amount of top gas available, e.g., for power and heat production is decreased. Third, high-quality raw materials are a prerequisite for smooth blast furnace operation; high-quality coal in particular is needed to produce coke with sufficient properties to ensure proper gas permeability and smooth burden descent. Lower-quality coals, as well as natural gas, which some countries have in great volumes, can be utilized in various direct and smelting reduction processes. The direct reduced iron (DRI) produced in a direct reduction process can be used as a feed material for the blast furnace, the basic oxygen furnace or the electric arc furnace, while the liquid hot metal from a smelting reduction process can in turn be used in the basic oxygen furnace or the electric arc furnace. The unit sizes and investment costs of an alternative ironmaking process are also lower than those of a blast furnace. In this study, the economy of an integrated steel plant is investigated by simulation and optimization. The studied system consists of linearly described unit processes from the coke plant to the steelmaking units, with a more detailed thermodynamic model of the blast furnace. The results from blast furnace operation with biomass injection revealed the importance of proper pre-processing of the raw biomass, as its composition, heating value and yield are all affected by the pyrolysis temperature. As for recycling of CO2-stripped blast furnace top gas, substantial reductions in the emission rates are achieved if the stripped CO2 can be stored; however, the optimal recycling degree, together with the other operating conditions, depends heavily on the cost structure of CO2 emissions and stripping/storage. The economic feasibility of using DRI in the blast furnace depends on the price ratio between DRI pellets and BF pellets, and the high amount of energy needed in the rotary hearth furnace to reduce the iron ore leads to increased CO2 emissions.
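A toy illustration of the kind of linear economic model this approach rests on (all coefficients below are hypothetical placeholders, not values from the thesis): choosing between coke and an injected biomass reductant subject to a carbon requirement, a fossil-CO2 cap and a tuyere injection limit can be posed as a small linear program.

```python
from scipy.optimize import linprog

# Toy reductant-cost minimization for a blast furnace burden.
# All coefficients are hypothetical placeholders, not thesis values.
# Variables: x = [coke (t/thm), pre-processed biomass injectant (t/thm)]
cost = [250.0, 180.0]                  # EUR per tonne of each reductant

A_ub = [[-0.85, -0.75],                # -(carbon content) <= -0.45 t C/thm
        [ 3.1,   0.0 ]]                # fossil CO2 factor <= cap (biomass
b_ub = [-0.45, 1.3]                    # counted as carbon-neutral here)

# Tuyere injection of pre-processed biomass limited to 0.15 t/thm.
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, 0.15)])
print(res.x, res.fun)                  # optimal burden mix and cost
```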
Abstract:
Many quantitative problems from widely different fields can be described as optimization problems: a measure of the quality of the solution is to be optimized while certain conditions on the solution are satisfied. The quality measure is usually called the objective function and may describe costs (for example in production or logistics), potential energy (molecular modeling, protein folding), risk (finance, insurance) or some other relevant measure. My doctoral thesis discusses in particular nonlinear programming, NLP, in finite dimensions. Problems with simple structure, for example some form of convexity, can be solved efficiently. Unfortunately, not all quantitative relationships can be modeled in a convex way. Non-convex problems can be attacked with heuristic methods, algorithms that search for solutions using deterministic or stochastic rules of thumb. Sometimes this works well, but heuristics can seldom guarantee the quality of the solution, or even that a solution will be found at all. For some applications this is unacceptable. Instead, so-called global optimization can be applied. By successively dividing the variable domain into smaller parts and computing stronger bounds on the optimal value, a solution within the error tolerance is found. This method is called branch-and-bound. To obtain lower bounds (when minimizing), the problem is approximated with simpler problems, for example convex ones, which can be solved efficiently. The thesis studies approaches for approximating differentiable functions with convex underestimators, in particular the so-called alphaBB method. This method adds perturbations of a certain form and guarantees convexity by imposing conditions on the perturbed Hessian matrix. My research has put forward a natural extension of the perturbations used in alphaBB. New methods for determining the underestimation parameters have been described and compared. The summary part discusses global optimization from a broader perspective on optimization and computational algorithms.
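For reference, the classical alphaBB underestimator mentioned above takes the following form (standard notation assumed; the thesis's extension generalizes this perturbation term):

```latex
% alphaBB convex underestimator of a twice-differentiable f over the box [x^L, x^U]:
\mathcal{L}(x) = f(x) + \sum_{i=1}^{n} \alpha_i \,(x_i^{L} - x_i)(x_i^{U} - x_i),
\qquad \alpha_i \ge 0,
% which is convex on the box whenever the perturbed Hessian is
% positive semidefinite there:
\nabla^{2} f(x) + 2\,\operatorname{diag}(\alpha) \succeq 0
\quad \text{for all } x \in [x^{L}, x^{U}].
```

Since the quadratic perturbation vanishes at the box corners and is nonpositive inside, L underestimates f, and the semidefiniteness condition is what drives the choice of the parameters alpha.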
Abstract:
This study examines the excess returns provided by G10 currency carry trading during the euro era. The currency carry trade has been popular throughout the past decades, offering excess returns to investors. The thesis aims to contribute to existing research on the topic by utilizing a new set of data for the euro era and by using the euro as the base currency of the study. The focus of the thesis is specifically on the performance, risk and diversification benefits of different carry trade strategies. The study finds evidence of the failure of the uncovered interest rate parity theory through multiple regression analyses. Furthermore, it finds evidence of significant diversification benefits in terms of the Sharpe ratio and improved return distributions. The results suggest that currency carry trades offered excess returns during 1999-2014 and that volatility plays an important role in carry trade returns. The risk, however, is diversifiable, and the results therefore support previous quantitative research findings on the topic.
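For context, uncovered interest rate parity is conventionally tested with a Fama-style regression (standard notation, not taken from the abstract itself): with s_t the log spot exchange rate and i_t, i_t* the domestic and foreign interest rates,

```latex
\Delta s_{t+1} = \alpha + \beta\,\bigl(i_t - i_t^{*}\bigr) + \varepsilon_{t+1}.
% UIP implies \alpha = 0 and \beta = 1, so that interest rate differentials
% are offset by exchange rate movements; empirical estimates of \beta near
% zero or negative are the "failure of UIP" that carry strategies exploit.
```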
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and to improve both production efficiency and stability. Both this work and earlier research indicate a savings potential of around 1-5%. All the supporting data is already available today, coming from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impact and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put both into understanding the minimum features needed to satisfy scheduling requirements in the industry and into establishing the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps on the market. It becomes clear that no such system exists on the marketplace today and that there is room to improve the target market's overall process efficiency through such a planning tool. The thesis also provides the case company with a better overall understanding of the different processes in this particular industry.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include identification of faults from seismic data and identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
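To make the ridge projection idea concrete, here is a minimal sketch in Python. It uses the closely related subspace-constrained mean-shift iteration rather than the thesis's trust region Newton method, and it assumes an unnormalized Gaussian kernel density; all names and parameter values are illustrative.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Kernel weights, gradient and Hessian of an (unnormalized)
    Gaussian kernel density estimate at the point x."""
    diff = data - x                                        # (n, d)
    w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)      # kernel weights
    grad = (w[:, None] * diff).sum(axis=0) / h**2
    hess = (w[:, None, None] * (diff[:, :, None] * diff[:, None, :] / h**2
            - np.eye(len(x)))).sum(axis=0) / h**2
    return w, grad, hess

def project_to_ridge(x, data, h, max_iter=200, tol=1e-6):
    """Project x onto a 1-D ridge of the KDE via subspace-constrained
    mean shift (a simpler stand-in for the trust region Newton method)."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        w, grad, hess = kde_grad_hess(x, data, h)
        vals, vecs = np.linalg.eigh(hess)   # eigenvalues ascending
        V = vecs[:, :len(x) - 1]            # directions "across" the ridge
        shift = (w[:, None] * data).sum(axis=0) / w.sum() - x  # mean shift
        step = V @ (V.T @ shift)            # move only across the ridge
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x

# Demo: noisy points around a circle; project one sample onto the ridge.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.normal(size=(500, 2))
print(project_to_ridge(np.array([1.3, 0.2]), pts, h=0.3))
```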
Abstract:
The objective of this thesis is to examine distribution network designs and modeling practices and to create a framework for identifying the best possible distribution network structure for the case company. The main research question is therefore: how to optimize the case company's distribution network in terms of customer needs and costs? The theory chapters introduce the basic building blocks of distribution network design and the required calculation methods and models. A framework for distribution network projects was created on the basis of the theory, and the case study was carried out following the defined framework. The distribution network calculations were based on the company's sales plan for the years 2014-2020. The main conclusions and recommendations were that the new Asian business strategy requires high investments in logistics; the first step is to open a new satellite distribution center (DC) in China as soon as possible to support sales, and a second possible step is to open a regional DC in Asia within 2-4 years.
Abstract:
Almost every problem of design, planning and management in technical and organizational systems has several conflicting goals or interests. Multicriteria decision models nowadays represent a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty due to lack of data, inadequacy of mathematical models to the real processes, calculation errors, etc. In practice this uncertainty usually leads to undesirable outcomes in which the solutions are very sensitive to any changes in the input parameters; investment management is one example. Stability analysis of multicriteria discrete optimization problems investigates how the solutions found behave in response to changes in the initial data (input parameters). This thesis is devoted to stability analysis in the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out with two approaches, qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data: the stability of solutions is defined in terms of the existence of a neighborhood in the initial data space such that any perturbed problem from this neighborhood remains stable with respect to the set of efficient solutions of the initial problem. The other approach studies quantitative measures such as the stability radius, which gives information about the limits of perturbations of the input parameters that do not lead to changes in the set of efficient solutions. In the present thesis several results were obtained, including attainable bounds for the stability radii of Pareto optimal and lexicographically optimal portfolios of the investment problem with Savage's criterion, Wald's criterion and the criterion of extreme optimism. In addition, special classes of the problem in which the stability radii are expressed by closed formulae were identified. The investigations were carried out using different combinations of the Chebyshev, Manhattan and Hölder metrics, which allows perturbations of the input parameters to be measured in different ways.
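As a quantitative illustration (notation assumed here; this follows the standard definition used in the stability literature rather than the thesis's exact formulation): for a problem with parameter matrix A and efficient (Pareto optimal) set P(A), the stability radius can be written as

```latex
% Stability radius of the efficient set P(A) under perturbations B of the
% parameter matrix A, measured in a chosen matrix norm:
\rho(A) = \sup \bigl\{\, \varepsilon \ge 0 \;:\;
    P(A + B) \subseteq P(A)
    \ \text{for all } B \text{ with } \|B\| < \varepsilon \,\bigr\}.
```

The choice of norm (Chebyshev, Manhattan or Hölder) determines how the size of the perturbation B is measured, which is why the bounds in the thesis depend on the metric combination used.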
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the discussion focuses on collect-and-place gantry machines because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature; this dividing technique is called hierarchical decomposition. All the subproblems of the one PCB, one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests verify that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. The experimental tests use both artificially generated data from Markov models and data from real-world PCB production. The thesis consists of an introduction and five publications in which the developed and used solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
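To give a flavor of the kind of subproblem heuristic involved (an illustrative stand-in, not one of the thesis's actual algorithms): a greedy nearest-neighbour heuristic for ordering the placement positions on a board, so that each successive head move is as short as possible, could look as follows.

```python
import numpy as np

def greedy_placement_order(coords, start=0):
    """Greedy nearest-neighbour ordering of placement positions.

    coords: (n, 2) array of component (x, y) positions on the board.
    Returns a visiting order that greedily minimizes each head move.
    """
    n = len(coords)
    unvisited = set(range(n))
    order = [start]
    unvisited.remove(start)
    current = start
    while unvisited:
        remaining = list(unvisited)
        dists = np.linalg.norm(coords[remaining] - coords[current], axis=1)
        current = remaining[int(np.argmin(dists))]   # closest unplaced spot
        order.append(current)
        unvisited.remove(current)
    return order

# Example: 8 random placement positions on a 100 mm x 100 mm board.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(8, 2))
print(greedy_placement_order(pts))
```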
Abstract:
Traditionally, real estate has been seen as a good diversification tool for a stock portfolio due to the lower return and volatility characteristics of real estate investments. However, the diversification benefits of a multi-asset portfolio depend on how the different asset classes co-move in the short and long run. As the asset classes are affected by the same macroeconomic factors, interrelationships limiting the diversification benefits could exist. This master's thesis aims to identify such dynamic linkages in the Finnish real estate and stock markets. The results are beneficial for portfolio optimization tasks as well as for policy-making. The real estate industry can be divided into direct and securitized markets. In this thesis the direct market is depicted by the Finnish housing market index, the securitized market is proxied by the Finnish all-sectors securitized real estate index and by a European residential Real Estate Investment Trust index, and the stock market is depicted by the OMX Helsinki Cap index. Several macroeconomic variables are incorporated as well. The methodology of this thesis is based on Vector Autoregressive (VAR) models: the long-run dynamic linkages are studied with Johansen's cointegration tests and the short-run interrelationships are examined with Granger causality tests. In addition, impulse response functions and forecast error variance decomposition analyses are used as robustness checks. The results show that long-run co-movement, or cointegration, did not exist between the housing and stock markets during the sample period, which indicates diversification benefits in the long run. However, cointegration between the stock and securitized real estate markets was identified. This indicates limited diversification benefits and shows that the listed real estate market in Finland has not matured enough to be considered a market separate from the general stock market. Moreover, while securitized real estate was shown to cointegrate with the housing market in the long run, the two markets are still too different in their characteristics to be used as substitutes in a multi-asset portfolio. This implies that the capital intensiveness of housing investments cannot be circumvented by investing in securitized real estate.
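A minimal sketch of this VAR-based workflow with statsmodels, under assumed data (the file name and column names below are hypothetical placeholders, not the thesis's series):

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

# df: quarterly log index levels; names are hypothetical placeholders.
df = pd.read_csv("indices.csv", parse_dates=["date"], index_col="date")
levels = df[["housing_idx", "securitized_re_idx", "omxh_cap"]]

# Johansen test on levels: comparing trace statistics with critical
# values indicates the number of cointegrating (long-run) relations.
joh = coint_johansen(levels, det_order=0, k_ar_diff=2)
print(joh.lr1)        # trace statistics
print(joh.cvt)        # critical values (90/95/99%)

# Short-run dynamics: VAR on first differences, lag order chosen by AIC.
returns = levels.diff().dropna()
var_res = VAR(returns).fit(maxlags=8, ic="aic")

# Pairwise Granger causality: does the stock market help predict housing?
grangercausalitytests(returns[["housing_idx", "omxh_cap"]], maxlag=4)

# Robustness: impulse responses and forecast error variance decomposition.
var_res.irf(10).plot()
var_res.fevd(10).summary()
```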
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the optimal distance measure is selected from a predefined pool of alternative measures systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, the values of the control parameters related to the selected distance measure are also optimized. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy on the current data. After the optimal distance measures for the given data set have been determined together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure, which is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with this method confirm that the choice of distance measure is one of the most crucial factors for obtaining high classification accuracy. The results also demonstrate that it is possible to build a classifier that selects the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. The thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
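A minimal sketch of the core idea, a nearest prototype classifier whose prototype vectors are optimized with differential evolution, is given below. The thesis additionally optimizes the choice and parameters of the distance measure itself and aggregates a pool of distances; plain Euclidean distance is fixed here for brevity, and the data set and settings are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n_classes, n_feat = len(np.unique(y)), X.shape[1]

def predict(prototypes, X):
    # Assign each sample to the class of its nearest prototype vector.
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)

def objective(flat):
    # Negative training accuracy: DE minimizes, so better fits score lower.
    protos = flat.reshape(n_classes, n_feat)
    return -np.mean(predict(protos, X_tr) == y_tr)

# One prototype vector per class; every coordinate bounded by the data range.
bounds = [(X.min(), X.max())] * (n_classes * n_feat)
result = differential_evolution(objective, bounds, seed=0, maxiter=100)
protos = result.x.reshape(n_classes, n_feat)
print("test accuracy:", np.mean(predict(protos, X_te) == y_te))
```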
Abstract:
The purpose of this thesis is to find the optimal heat recovery solution for Wärtsilä's dynamic district heating power plant in the German energy market, where the government pays subsidies to CHP plants in order to increase their share of domestic power production to 25% by 2020. Dozens of different heat recovery connections have been simulated in order to determine the most efficient ones. A further purpose is to study the feasibility of the different heat recovery connections in the dynamic district heating power plant on the German market, taking into account day-ahead electricity prices, district heating network temperatures and CHP subsidies. The auxiliary cooling, dynamic operation and cost efficiency of the power plant are also investigated.