46 results for approximate KNN query


Relevance:

10.00%

Publisher:

Abstract:

By an exponential sum of the Fourier coefficients of a holomorphic cusp form we mean the sum formed by taking the Fourier series of the form, cutting away the beginning and the tail, and considering the remaining sum on the real axis. For simplicity's sake, the coefficients are typically normalized; this is not essential, however, as the normalization can be introduced and removed simply by partial summation. We improve the approximate functional equation for the exponential sums of the Fourier coefficients of holomorphic cusp forms by giving an explicit upper bound for the error term appearing in the equation. The approximate functional equation is originally due to Jutila [9] and is a crucial tool for transforming sums into shorter sums; the transformation changes the point on the real axis at which the sum is considered. We also improve the known upper bounds for the size of the exponential sums. For very short sums we do not obtain anything better than the trivial estimate, which multiplies the upper bound for a single Fourier coefficient (the coefficients are bounded by the divisor function, as Deligne [2] showed) by the number of coefficients. This estimate is extremely rough, as no possible cancellation is taken into account; for short sums, however, it is unclear whether any appreciable cancellation occurs.
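
To fix notation (a standard convention for such sums, stated here as background rather than quoted from the thesis): for normalized Fourier coefficients $a(n)$ of the cusp form, the sums in question are of the shape

\[
\sum_{M \le n \le M+\Delta} a(n)\, e(n\alpha), \qquad e(x) = e^{2\pi i x}, \quad \alpha \in \mathbb{R},
\]

that is, the Fourier series truncated to the range $M \le n \le M+\Delta$ and evaluated at a real point $\alpha$.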

Relevance:

10.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect the findings of these surveys to be biased, especially given the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web proposed so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
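
As a minimal illustration of the task being automated, a form query amounts to submitting parameters to a search interface and extracting data from the resulting page. The sketch below is generic and hypothetical (the URL and the field names "title" and "author" are invented; it is not the thesis's system, and real deep web forms often additionally require JavaScript handling):

```python
# Sketch: query a web database through its search form and collect
# the link texts from the result page. URL and field names are
# hypothetical placeholders.
import requests
from html.parser import HTMLParser

class ResultExtractor(HTMLParser):
    """Collects the text content of links on a result page."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.results = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.results.append(data.strip())

response = requests.get("http://example.org/search",
                        params={"title": "databases", "author": ""})
parser = ResultExtractor()
parser.feed(response.text)
print(parser.results)
```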

Relevance:

10.00%

Publisher:

Abstract:

Order picking is a central part of warehouse operations, in some cases accounting for up to 50% of logistics costs, so even a small improvement in efficiency yields considerable savings. Despite its simple basic idea, however, picking is often part of a complex process that depends heavily on information systems and operating models. In this Master's thesis, simulation software is built for a provider of retail logistics services, making it possible to study how certain information system parameters, the warehouse layout, and article placement affect the picking work. The thesis also examines the results of simulations carried out in connection with the reorganisation of the client company's warehouse and compares them with the actual outcome. The simulations carried out during the warehouse reorganisation show that the software can simulate the client company's picking batches and picking rounds with reasonable accuracy when articles are moved from one picking area to another. The simulator made it possible to estimate changes in the key figures of picking batches and in picking distances. Based on the changes in picking batch structures and workloads produced by the simulation, the client company made a decision between two alternative article placements. The results measured after the change proved, per picking area, to be very similar to the results produced by the simulator.
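
As a toy illustration of one quantity such a simulator estimates, the travel of a picking round can be compared between two article placements. The coordinates and the Manhattan-distance routing below are invented simplifications, not the client company's model:

```python
# Toy sketch: travel distance of a picking round that starts and ends
# at the depot and visits pick locations in order. Coordinates and
# routing are invented simplifications.
def route_distance(stops, depot=(0.0, 0.0)):
    path = [depot] + list(stops) + [depot]
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

# The same pick list under two alternative article placements:
placement_a = [(2.0, 5.0), (2.0, 9.0), (6.0, 1.0)]
placement_b = [(1.0, 2.0), (1.0, 4.0), (3.0, 1.0)]
print(route_distance(placement_a), route_distance(placement_b))
```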

Relevance:

10.00%

Publisher:

Abstract:

There is an increasing reliance on computers to solve complex engineering problems, because computers not only support the development and implementation of adequate and clear models but can also markedly reduce the costs involved. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics makes it difficult or impossible to solve the equations governing an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process; in a simulation, the setup and parameters can be altered much more easily than in a real-world experiment.

The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids) and for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were carried out with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field.

The results from the simulation of fiber suspensions elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole to a venule via a capillary showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; furthermore, the result corresponds to the experimental observation that the RBC deforms during the movement. The concluding remarks provide a methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when the magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the case where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that temperature and the initial concentration of the magnetic phase make the largest and smallest contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
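
For background, the correlation of Macdonald et al. (1979) referred to above is an Ergun-type equation; in the form usually quoted (with the constants for smooth particles; the thesis's exact parameterization may differ), the dimensionless pressure drop is

\[
\frac{\Delta p}{L} \cdot \frac{d_p\, \varepsilon^3}{\rho u^2 (1-\varepsilon)}
= \frac{180\,(1-\varepsilon)}{Re} + 1.8,
\qquad Re = \frac{\rho u d_p}{\mu},
\]

where $\varepsilon$ is the porosity, $d_p$ the particle diameter, $u$ the superficial velocity, $\rho$ the fluid density, and $\mu$ the dynamic viscosity.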

Relevance:

10.00%

Publisher:

Abstract:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks; however, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from a vast amount of user-generated feedback.

In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to parse ranking in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.

Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
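
The core idea the thesis builds on can be sketched in a few lines: regularized least squares fitted to pairwise score differences rather than to single outputs. The linear version below is a generic illustration (the thesis's algorithms are kernel-based and considerably more refined):

```python
# Sketch of pairwise regularized least squares for ranking: fit w so
# that score differences w.(x_i - x_j) match label differences
# y_i - y_j, with an L2 penalty on w.
import numpy as np

def fit_pairwise_rls(X, y, lam=1.0):
    n, d = X.shape
    i, j = np.triu_indices(n, k=1)   # all pairs i < j
    D = X[i] - X[j]                  # pairwise feature differences
    t = y[i] - y[j]                  # pairwise label differences
    A = D.T @ D + lam * np.eye(d)
    return np.linalg.solve(A, D.T @ t)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)
w = fit_pairwise_rls(X, y, lam=0.1)
ranking = np.argsort(-(X @ w))       # predicted ordering of the points
```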

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents a search for a new design of the frame for a permanent magnet generator mounted on a windmill. The objective of this work is to offer new design ideas for the stator frame, i.e., new concepts for connecting the stator core to the stator frame in a generator. The desired aims of the new design concepts are: simplified production of the structure, reduced material use, use of standard components, and light weight of the construction. The thesis contains several new possible designs for the stator frame structure, together with a list of possible connection concepts that can be used to join the stator to the frame. All new ideas are described and compared according to how well they match the desired purposes of the work. The new design concepts are modeled using modern software. The main part of the thesis contains several approximate computer models of the current and newly proposed constructions and a description of the loads and stresses in the current stator frame, with an evaluation of the most important stress and load characteristics. The final design is the result of all the previous research: it describes a new frame structure and a joining concept for it. This structure meets the main aims of the work, but it does not yet include a detailed design with dimensions or check calculations of the frame and welds. The thesis gives an overview of the design search and of the evaluation and comparison of new concepts for the generator structure. It also gives a general overview of renewable energy technology and basic knowledge of windmill turbines and their components.

Relevance:

10.00%

Publisher:

Abstract:

Deflection compensation of flexible boom structures in robot positioning is usually done using tables that contain the magnitude of the deflection together with the inverse kinematics solution of the rigid structure. The number of table values grows large if the working area of the boom is large and the required positioning accuracy is high. Inverse kinematics problems are highly nonlinear and, if the structure is redundant, in some cases cannot be solved in closed form. If the structural flexibility of the manipulator arms is taken into account, the problem is almost impossible to solve using analytical methods. Neural networks, however, offer the possibility of approximating any linear or nonlinear function. This study presents four different methods of using neural networks in the static deflection compensation and inverse kinematics solution of a flexible, hydraulically driven manipulator. The training information required for the neural networks is obtained from a simulation model that includes the elasticity characteristics. The functionality of the presented methods is tested with simulated and measured positioning accuracy results. The simulated positioning accuracy is tested at 25 separate coordinate points, each with five different mass loads; the mean positioning error of the manipulator decreased from 31.9 mm to 4.1 mm at the test points. This accuracy enables the use of flexible manipulators in the positioning of larger objects. The measured positioning accuracy is tested at 9 separate points using three different mass loads; the mean positioning error decreased from 10.6 mm to 4.7 mm and the maximum error from 27.5 mm to 11.0 mm.
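
A toy sketch of the compensation idea follows: learn the static deflection from simulated data and correct the positioning target by the predicted sag. The input/output choice (tip target and payload mass in; deflection out) and the stand-in "simulation model" are illustrative assumptions, not the thesis's setup:

```python
# Toy sketch: learn a static deflection correction with a small MLP
# and add it to the rigid-body positioning target. The synthetic
# deflection formula below is a stand-in for the elastic simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Inputs: tip target x (m), y (m) and payload mass (kg).
targets = rng.uniform([1.0, 0.0, 0.0], [5.0, 3.0, 500.0], size=(2000, 3))
# Stand-in for the elastic model: sag grows with reach and payload.
defl = np.column_stack([
    -1e-4 * targets[:, 0] ** 2 * targets[:, 2] / 100.0,
    -2e-4 * targets[:, 0] * targets[:, 2] / 100.0,
])

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
net.fit(targets, defl)

# Compensated command: aim away from the predicted sag.
goal = np.array([[4.0, 1.5, 350.0]])
corrected_xy = goal[:, :2] - net.predict(goal)
```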

Relevance:

10.00%

Publisher:

Abstract:

Pump drives account for roughly a quarter of the energy consumed by electric motors in Europe. With rising energy prices, energy savings and energy efficiency have become a high priority in energy-intensive industries, and improving the efficiency of pump drives has become an essential part of energy efficiency studies in the paper and board industry. This thesis examines the energy efficiency of the pump drives of a board machine on the basis of motor current measurements. The analysis is based on determining the motor shaft power and, from it, calculating the pump's operating point. The estimation methods used and the results obtained for the board machine's pump drives are presented. In addition, the energy-saving potential of three individual pump drives is assessed. The method used in this work is applicable to analysing the operation and efficiency of both fixed-speed and variable-speed pump drives.
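
A simplified sketch of the estimation chain described above, from motor current to the pump operating point, is given below. All numbers (motor parameters, curve data) are made-up placeholders, not values from the thesis:

```python
# Sketch: motor current -> shaft power -> operating point read off
# the pump's characteristic curves. All numeric data are placeholders.
import numpy as np

def shaft_power(current_a, voltage_v=400.0, power_factor=0.85,
                eta_motor=0.95):
    """Approximate shaft power (W) of a three-phase motor."""
    return np.sqrt(3) * voltage_v * current_a * power_factor * eta_motor

# Manufacturer's curves at nominal speed: flow (m^3/h) vs. shaft
# power (kW) and vs. head (m).
q_curve = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
p_curve = np.array([20.0, 28.0, 35.0, 41.0, 46.0])
h_curve = np.array([60.0, 58.0, 53.0, 45.0, 34.0])

p_shaft_kw = shaft_power(current_a=62.0) / 1000.0
q_est = np.interp(p_shaft_kw, p_curve, q_curve)  # invert the QP curve
h_est = np.interp(q_est, q_curve, h_curve)       # head from the QH curve
print(f"operating point: Q = {q_est:.0f} m3/h, H = {h_est:.1f} m")
```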

Relevance:

10.00%

Publisher:

Abstract:

Talvivaaran Kaivososakeyhtiö Oyj aims to publish the company's first corporate social responsibility report in 2011. The report is intended to meet the C-level reporting requirements of the GRI (Global Reporting Initiative) guidelines. This Master's thesis is an integral part of developing Talvivaara's CSR reporting. Its objective was to define GRI indicators suitable for Talvivaara's first report. The thesis reviews Talvivaara's 2009 annual report and examines how it should be supplemented to meet the C-level requirements for the GRI standard disclosures. In addition, a stakeholder analysis was carried out to establish the company's view of stakeholder expectations. The selection of indicators for the forthcoming report was guided both by stakeholder interest and by how material the indicators are to Talvivaara's operations. With the selected indicators, the company's forthcoming report clearly meets the C-level reporting requirements. The thesis also proposes follow-up measures for further developing the company's communications.

Relevance:

10.00%

Publisher:

Abstract:

This Master's thesis examines the uncertainties of the level 2 probabilistic risk analysis (PRA) of the Loviisa nuclear power plant. Level 2 risk studies address nuclear power plant accidents in which part of the radioactive material in the reactor is released to the environment. The main result of these studies is the annual frequency of a large release, which is essentially a statistical expected value based on actual plant history. The credibility of this expected value can be improved by taking the most significant uncertainties of the calculation into account. Uncertainties arise, among other things, from severe reactor accident phenomena, safety system components, human actions, and undefined parts of the reliability model. The thesis describes how uncertainty analyses are integrated into the probabilistic risk analyses of the Loviisa plant. This is implemented with the auxiliary programs PRALA and PRATU developed in this work, which make it possible to add uncertainty parameters derived from plant history to the reliability data of the risk analyses. As a calculation example, a confidence interval describing the variation of the annual large release frequency of the Loviisa plant has been computed. This example is mainly based on conservative uncertainty estimates, not on actual statistical uncertainties. According to the results, the large release frequency of Loviisa has a wide range of variation: with the current uncertainty parameters the error factor is 8.4. The confidence interval of the large release frequency can, however, be narrowed in the future by using uncertainty parameters based on actual plant history.
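
For context, the error factor is conventionally defined for a lognormally distributed frequency (a standard PRA convention, stated here as background rather than taken from the thesis) by

\[
K = \sqrt{\frac{x_{0.95}}{x_{0.05}}}, \qquad x_{0.05} = \frac{m}{K}, \quad x_{0.95} = m\,K,
\]

where $m$ is the median; an error factor of 8.4 thus means that the 5th and 95th percentiles of the large release frequency span a factor of roughly 70.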

Relevance:

10.00%

Publisher:

Abstract:

The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered an important approach to software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could even exceed the savings associated with using it. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize, or configure the framework for a particular DSL, and there are different approaches to this. One approach is to use an application programming interface (API) and extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program that specifies how the tool achieves this functionality. In this thesis we explore this second approach, using graph transformation as the basic mechanism for customizing a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing, and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework, including an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
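
As a toy flavor of the graph-transformation approach (the model, pattern, and rewrite rule below are invented for illustration and are unrelated to MICAS or EFCO), a tool-configuration rule matches a pattern in the model graph and rewrites it:

```python
# Toy sketch of a graph-rewrite step: match the pattern
# State -out-> Transition -in-> State and collapse each matched
# Transition node into a direct edge. All structures are invented.
model = {
    "nodes": {1: "State", 2: "State", 3: "Transition"},
    "edges": [(1, 3, "out"), (3, 2, "in")],
}

def match_transitions(g):
    outs = [(s, t) for (s, t, lbl) in g["edges"] if lbl == "out"]
    ins = [(t, d) for (t, d, lbl) in g["edges"] if lbl == "in"]
    return [(s, t, d)
            for (s, t) in outs for (t2, d) in ins
            if t == t2 and g["nodes"][s] == g["nodes"][d] == "State"]

for s, t, d in match_transitions(model):
    model["nodes"].pop(t)                      # delete the matched node
    model["edges"] = [e for e in model["edges"] if t not in (e[0], e[1])]
    model["edges"].append((s, d, "goto"))      # add the rewritten edge
```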

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this Master's thesis was to identify areas for improvement in the conventional (non-nuclear) waste management of Fortum's Loviisa nuclear power plant. The aim was in particular to find ways to reduce the amount of landfill waste and to make sorting more effective. The effects of the complete reform of Finnish waste legislation on waste management practices also played a central role in the work. The work followed the structure of a waste management plan, which consists of an initial survey and of drawing up and implementing the plan itself. The methods used to identify development needs were a review of regulatory requirements, an examination of current operations, a waste management survey addressed to power plant employees, benchmarking, and a cost comparison of selected recovery and final disposal methods. The results show that waste sorting could be made more effective above all by increasing employee training. To make sorting easier, the instructions should be more readily available both to the plant's own personnel and to contractors. In hazardous waste management, most problems appeared in the labelling of hazardous waste packages at the points where the waste is generated; as a solution, it was proposed to trial waste cards placed at these points, from which packers could easily check the required markings. The collection of black waste oil could also be improved so that a larger share of it could be recovered as material. To reduce the amount of landfill waste, the thesis proposed sending mixed waste for incineration instead of to landfill. As a result of this change, the plant's waste management costs may increase, but from an environmental point of view the change would be positive.

Relevance:

10.00%

Publisher:

Abstract:

The objective of the pilotage effectiveness study was to produce a process description of the pilotage procedure, to design performance indicators based on this process description for use by Finnpilot, and to work out a preliminary plan for the implementation of the indicators within the Finnpilot organisation. The theoretical aspects of pilotage as well as the guidelines and standards used were determined through a literature review. Based on the literature review, a process flow model with the following phases was created: the planning of pilotage, the start of pilotage, the act of pilotage, the end of pilotage, and the closing of pilotage. The model based on the literature review was tested through interviews and observation of pilotage. At the same time, an e-mail survey was directed at foreign pilotage organisations; it included a questionnaire concerning their standards and management systems, operating procedures, measurement tools, and their attitude to passage planning.

The main issues in the observations and interviews were the passage plan and bridge team co-operation. The phases of the pilotage process model emerged in both the pilotage activities and the interviews, whereas bridge team co-operation was relatively marginal. Most of the pilotage organisations that responded to the survey also use some standards-based management system, and all of them use some sort of pilotage process model. According to the survey, the main measurement tools for pilotage are statistical information concerning pilotage and the organisations, customer feedback surveys, and financial results. Attitudes towards passage planning were mostly positive among the organisations.

A workshop with pilotage experts was arranged, in which the process model constructed on the basis of the literature review was tuned to match practical pilotage. In the workshop it was determined that certain phases and their corresponding tasks, through which pilotage can be described as a process, were identifiable in all pilotage. The result of the workshop was a complemented process model, which separates incoming and outgoing traffic, as well as fairway pilotage and harbour pilotage, from each other. Additionally, indicators divided according to the data-gathering method were defined. Data concerning safety and traffic flow is gathered in the form of customer feedback, and the pilot's own perceptions of the pilotage process are gathered through self-assessment. The measurement data connected to the phases of the pilotage process is generated, e.g., by gathering statistics on the success of pilot dispatches, the accuracy of the pilotage, and the incidents that occurred during pilotage: near misses, deviations, and accidents. The measurement data is collected via PilotWeb at the closing of the pilotage.

A separate project, with a project group in which pilots also participate, will be established for the deployment of the performance indicators. The phases of the project are the definition phase, the implementation phase, and the deployment phase. The purpose of the definition phase is to prepare the customer feedback questions for ship masters and to work out the self-assessment questionnaires and the questions concerning the process indicators.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this thesis work is to develop and study the Differential Evolution algorithm for multi-objective optimization with constraints. Differential Evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. Multi-objective evolutionary algorithms have become popular because they can produce a set of compromise solutions during the search process to approximate the Pareto-optimal front. The starting point for this thesis was an idea of how Differential Evolution could, with simple changes, be extended to optimization with multiple constraints and objectives. This approach is implemented, experimentally studied, and further developed in the work; the development and study concentrate on the multi-objective optimization aspect. The main outcomes of the work are versions of a method called Generalized Differential Evolution, which aim to improve the performance of the method in multi-objective optimization. A diversity preservation technique is developed that is effective and efficient compared to previous diversity preservation techniques. The thesis also studies the influence of the control parameters of Differential Evolution in multi-objective optimization, and proposals for initial control parameter value selection are given. Overall, the work contributes to the diversity preservation of solutions in multi-objective optimization.
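
For readers unfamiliar with the base method, the classic DE/rand/1/bin step that Differential Evolution (and hence Generalized Differential Evolution) builds on is sketched below on a toy single-objective problem; the thesis's contribution replaces the simple greedy selection with constraint- and dominance-based rules, which are not shown here:

```python
# Sketch of basic Differential Evolution (DE/rand/1/bin) on a toy
# sphere objective. F, CR and the objective are example choices.
import numpy as np

rng = np.random.default_rng(2)
NP, D, F, CR = 20, 5, 0.8, 0.9
f = lambda x: np.sum(x ** 2)                 # toy objective
pop = rng.uniform(-5.0, 5.0, size=(NP, D))

for _ in range(200):
    for i in range(NP):
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                                size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True                # keep >= 1 gene
        trial = np.where(cross, mutant, pop[i])      # binomial crossover
        if f(trial) <= f(pop[i]):                    # greedy selection
            pop[i] = trial

best = min(pop, key=f)
```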