52 results for Multi-objective optimization techniques
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods under only mild assumptions about the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
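The thesis's actual projection method is a trust-region Newton iteration, which is not reproduced here. As a rough, self-contained illustration of what "projecting a point onto a ridge" of a Gaussian kernel density means, the following sketches a subspace-constrained mean-shift style iteration in 2-D (a standard alternative ridge-following scheme, not the thesis's algorithm); the data set, bandwidth and starting point are all hypothetical:

```python
import math

# Toy data: points along the x-axis form a one-dimensional "ridge" in 2-D.
DATA = [(-2.0 + 0.5 * i, 0.0) for i in range(9)]
H = 0.5  # kernel bandwidth (assumed value, not from the thesis)

def kde_parts(x, y):
    """Kernel weight sum, mean-shift target and (unnormalized) 2x2 Hessian
    of the Gaussian kernel density estimate at (x, y)."""
    wsum = mx = my = 0.0
    hxx = hxy = hyy = 0.0
    for (px, py) in DATA:
        dx, dy = px - x, py - y
        w = math.exp(-(dx * dx + dy * dy) / (2 * H * H))
        wsum += w
        mx += w * px
        my += w * py
        # Hessian of the density is proportional to sum_i w_i*(d d^T/h^4 - I/h^2)
        hxx += w * (dx * dx / H**4 - 1 / H**2)
        hxy += w * (dx * dy / H**4)
        hyy += w * (dy * dy / H**4 - 1 / H**2)
    return wsum, mx / wsum, my / wsum, hxx, hxy, hyy

def smallest_eigvec(a, b, c):
    """Unit eigenvector of the symmetric matrix [[a, b], [b, c]]
    belonging to its smallest eigenvalue."""
    lmin = 0.5 * (a + c) - math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    if abs(b) > 1e-12:
        vx, vy = b, lmin - a
    else:
        vx, vy = (1.0, 0.0) if a <= c else (0.0, 1.0)
    n = math.hypot(vx, vy)
    return vx / n, vy / n

def project_to_ridge(x, y, iters=50):
    """Move only along the Hessian eigenvector with the smallest eigenvalue
    (the direction across the ridge) until the mean-shift step vanishes."""
    for _ in range(iters):
        _, mx, my, a, b, c = kde_parts(x, y)
        vx, vy = smallest_eigvec(a, b, c)
        step = (mx - x) * vx + (my - y) * vy  # mean-shift vector projected on v
        x, y = x + step * vx, y + step * vy
        if abs(step) < 1e-9:
            break
    return x, y

rx, ry = project_to_ridge(0.3, 0.4)
print(rx, ry)  # the point lands near the ridge y ≈ 0
```

The projection leaves the coordinate along the ridge essentially untouched and pulls the point onto the ridge across it, which is the behaviour the differential equation-based tracing in the thesis builds on.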
Abstract:
The objective of this thesis is to examine distribution network designs and modeling practices and to create a framework for identifying the best possible distribution network structure for the case company. The main research question therefore is: how can the case company’s distribution network be optimized in terms of customer needs and costs? The theory chapters introduce the basic building blocks of distribution network design and the required calculation methods and models. A framework for distribution network projects was created based on the theory, and the case study was carried out by following the defined framework. The distribution network calculations were based on the company’s sales plan for the years 2014–2020. The main conclusions and recommendations were that the new Asian business strategy requires high investments in logistics; the first step is to open a new satellite DC in China as soon as possible to support sales, and a second possible step is to open a regional DC in Asia within 2–4 years.
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. targeting strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective techniques therefore have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping and for the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km²) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale.
Slightly better results were achieved using a Radial Basis Function (RBF)-based ANN than a Radial Basis Functional Link Net (RBFLN) method, narrowing down more accurately the most probable areas for a.s. soil occurrence and defining more properly the least probable areas. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive lands for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and the critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale. Mapping at this scale would be extremely time-consuming through manual assessment. The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c.
21,300 km²) which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
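As a rough sketch of how a fuzzy-logic overlay of evidential datalayers can produce an a.s. soil probability surface, the following combines per-layer memberships with the fuzzy gamma operator for one hypothetical grid cell; the membership ramps, the gamma value and the layer scores are all assumptions for illustration, not values from the thesis:

```python
def linear_membership(value, low, high):
    """Map a raw layer value to a fuzzy membership in [0, 1] via a linear ramp."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def gamma_overlay(memberships, gamma=0.8):
    """Fuzzy gamma operator: a compromise between the fuzzy algebraic
    product (gamma=0) and the fuzzy algebraic sum (gamma=1)."""
    prod = 1.0   # algebraic product accumulator
    comp = 1.0   # complement accumulator for the algebraic sum
    for m in memberships:
        prod *= m
        comp *= (1.0 - m)
    algebraic_sum = 1.0 - comp
    return (prod ** (1.0 - gamma)) * (algebraic_sum ** gamma)

# One grid cell: Quaternary geology, aerogeophysics and slope scores (hypothetical)
cell = {"geology": 0.9, "geophysics": 0.7, "slope": 0.2}
mu = [linear_membership(v, 0.0, 1.0) for v in cell.values()]
probability = gamma_overlay(mu, gamma=0.8)
```

Repeating this cell by cell over the evidential rasters would yield a probability map of the kind the thesis compares against ANN outputs; the gamma parameter controls how strongly a single low-membership layer suppresses the combined score.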
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase of the classifier. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found for the particular data set, the results are aggregated to form a total distance, which is used to compute the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely, ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are the six new generalized versions of the previously introduced differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
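The OWA and GOWA aggregation step described above can be sketched as follows. The distance values and weight vectors are hypothetical, and `lam` denotes the GOWA power parameter (with `lam = 1`, GOWA reduces to plain OWA):

```python
def owa(values, weights):
    """Ordered weighted averaging: sort values in descending order,
    then weight each value by its rank position, not its source."""
    b = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, b))

def gowa(values, weights, lam=2.0):
    """Generalized OWA: raise the ordered values to the power 'lam'
    before weighting, then take the lam-th root of the weighted sum."""
    b = sorted(values, reverse=True)
    return sum(w * (v ** lam) for w, v in zip(weights, b)) ** (1.0 / lam)

# Normalized distance values from three distance measures (hypothetical)
d = [0.2, 0.8, 0.5]
print(owa(d, [1.0, 0.0, 0.0]))   # all weight on the largest value -> the max
print(owa(d, [1/3, 1/3, 1/3]))   # equal weights -> the arithmetic mean
```

Because the weights attach to rank positions, the same weight vector can realize anything from a max-like to a min-like aggregation of the per-measure distances, which is why the weight generation scheme matters for classification accuracy.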
Abstract:
Efficient production and consumption of energy has become the top priority of national and international policies around the world. Manufacturing industries have to address the requirements of the government in relation to energy saving and ecologically sustainable products. These industries are also concerned with energy and material usage due to their rising costs. Therefore, industries have to find solutions that can support environmental preservation yet maintain competitiveness in the market. Welding, a major manufacturing process, consumes a great deal of material and energy. It is a crucial process in improving a product’s life-cycle cost, strength, quality and reliability. Factors which lead to weld-related inefficiencies have to be effectively managed if industries are to meet their quality requirements and fulfil a high-volume production demand. Therefore, it is important to consider practical strategies for optimizing energy and material consumption in the welding process. The main objective of this thesis is to explore methods of minimizing the ecological footprint of the welding process and of effectively managing its material and energy usage. The author has performed a critical review of factors including improved weld power source efficiency, efficient weld techniques, newly developed weld materials, intelligent welding systems, weld safety measures and personnel training. The study lends strong support to the fact that the use of eco-friendly welding units and the quality weld joints obtained with the minimum possible consumption of energy and materials should be the main directions of improvement in welding systems.
The study concludes that gradually implementing the practical strategies presented in this thesis would help manufacturing industries to achieve the following: reduced power consumption, enhanced power control and manipulation, increased deposition rates, reduced cycle times, reduced joint preparation time, reduced heat-affected zones, reduced repair rates, improved joint properties, reduced post-weld operations, improved automation, improved sensing and control, avoidance of hazardous conditions and reduced exposure of welders to potential hazards. These improvements can help promote welding as a green manufacturing process.
Abstract:
The case company utilizes a multi-branding strategy (or house-of-brands strategy) in its product portfolio. In practice the company has multiple brands – one main brand and four acquired brands – which all utilize one single product platform. The objective of this research is to analyze the case company’s multi-branding strategy and its benefits and challenges. Moreover, the purpose is to clarify how a company in B2B markets could utilize a multi-branding strategy more efficiently and profitably. The theoretical part of this thesis consists of aspects of branding strategies: different brand name architectures, the benefits and challenges of different strategies, and different ways of utilizing branding strategies in mergers and acquisitions. The empirical part, on the other hand, includes the description of the case company’s branding strategy and the employees’ perspective on the benefits and challenges of the multi-branding strategy, and how to utilize it more efficiently and profitably. This study shows that the major benefits of utilizing multi-branding are lower production costs, the ability to reach wider market coverage, the possibility to utilize common sales tools, synergies in R&D and shared resources. On the other hand, the major challenges are lack of product differentiation, internal competition, branding issues in production and deliveries, pricing issues and conflicts, and compromises in product compatibility and suitability. Based on the results, several ways to utilize the multi-branding strategy more efficiently and profitably were found: putting more effort into brand image and product differentiation, having more co-operation among the brands, and focusing on more precise customer and market segmentation.
Abstract:
Liberalization of electricity markets has resulted in a competitive Nordic electricity market, in which electricity retailers play a key role as electricity suppliers, market intermediaries, and service providers. Although these roles may remain unchanged in the near future, the retailers’ operation may change fundamentally as a result of the emerging smart grid environment. Especially the increasing amount of distributed energy resources (DER), and the improving opportunities for their control, are reshaping the operating environment of the retailers. This requires that the retailers’ operation models be developed to match an operating environment in which the active use of DER plays a major role. Electricity retailers have a clientele, and they operate actively in the electricity markets, which makes them a natural market party to offer end-users new services aiming at an efficient and market-based use of DER. From the retailer’s point of view, the active use of DER can provide means to adapt the operation to meet the challenges posed by the smart grid environment, and to pursue the ultimate objective of the retailer, which is to maximize the profit of operation. This doctoral dissertation introduces a methodology for the comprehensive use of DER in an electricity retailer’s short-term profit optimization that covers operation in a variety of marketplaces including the day-ahead, intra-day, and reserve markets. The analysis results provide data on the key profit-making opportunities and the risks associated with different types of DER use. Therefore, the methodology may serve as an efficient tool for an experienced operator in the planning of optimal market-based DER use. The key contributions of this doctoral dissertation lie in the analysis and development of a model that allows the retailer to benefit from the profit-making opportunities brought by the use of DER in different marketplaces, and also to manage the major risks involved in the active use of DER.
In addition, the dissertation introduces an analysis of the economic potential of DER control actions in different marketplaces including the day-ahead Elspot market, balancing power market, and the hourly market of Frequency Containment Reserve for Disturbances (FCR-D).
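As a toy illustration of the kind of market-based DER value the dissertation analyzes, the following finds the most profitable single charge/discharge cycle of a storage unit against hourly day-ahead prices. The prices, capacity and round-trip efficiency are hypothetical, and the real methodology spans several marketplaces and risk considerations that this sketch ignores:

```python
# Hypothetical day-ahead prices (EUR/MWh) for eight consecutive hours.
PRICES = [30.0, 25.0, 22.0, 28.0, 45.0, 60.0, 55.0, 40.0]

def best_arbitrage(prices, capacity_mwh=1.0, efficiency=0.9):
    """Best single charge/discharge cycle: buy energy at hour i and sell
    it back at a later hour j, losing (1 - efficiency) on the round trip.
    Returns (profit, charge_hour, discharge_hour)."""
    best = (0.0, None, None)
    for i, buy in enumerate(prices):
        for j in range(i + 1, len(prices)):
            profit = (prices[j] * efficiency - buy) * capacity_mwh
            if profit > best[0]:
                best = (profit, i, j)
    return best

profit, i, j = best_arbitrage(PRICES)
print(profit, i, j)  # charge in the cheapest early hour, discharge at the peak
```

Even this crude sketch shows the core profit mechanism of market-based DER use: intertemporal price spreads net of conversion losses. The dissertation's methodology extends this to multiple simultaneous marketplaces and to the associated risks.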