168 results for multiobjective
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of systematically and simultaneously optimizing over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-bucket algorithm to generate the upper bound at each node of the search tree (in the context of maximizing the values of objectives). Since the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (measuring the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
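To make the dominance tests concrete, here is a minimal sketch, not code from the thesis: a plain Pareto-dominance check, a filter for the maximal (undominated) set, and a matrix-based check in the spirit of the trade-off approach. The matrix `W` encoding elicited trade-offs, and all function names, are assumptions for illustration.

```python
import numpy as np

def pareto_dominates(u, v):
    """True if utility vector u is >= v componentwise and > v somewhere."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u >= v) and np.any(u > v))

def filter_maximal(vectors):
    """Keep only vectors that no other vector Pareto-dominates."""
    return [u for u in vectors
            if not any(pareto_dominates(v, u) for v in vectors if v is not u)]

def tradeoff_dominates(u, v, W):
    """Widened dominance induced by trade-offs: u dominates v when every
    row of W scores u - v nonnegatively (and some row positively). A single
    matrix product replaces repeated LP calls, which is the kind of
    speed-up the matrix-multiplication approach delivers."""
    d = W @ (np.asarray(u) - np.asarray(v))
    return bool(np.all(d >= 0) and np.any(d > 0))
```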
Abstract:
This paper examines the ability of the doubly fed induction generator (DFIG) to deliver multiple reactive power objectives during variable wind conditions. The reactive power requirement is decomposed according to various control objectives (e.g. power factor control, voltage control, loss minimisation, and flicker mitigation) defined over different time frames (i.e. seconds, minutes, and hours), and the control reference is generated by aggregating the individual reactive power requirements of each control strategy. A novel coordinated controller is implemented for the rotor-side and grid-side converters, taking their capability curves into account; it is shown to effectively utilise the aggregated DFIG reactive power capability to enhance system performance. The performance of the multi-objective strategy is examined for a range of wind and network conditions, and it is shown that, in the majority of scenarios, more than 92% of the main control objective can be achieved when the integrated flicker control scheme is introduced alongside the main reactive power control scheme. Therefore, optimal control coordination across the different control strategies can maximise the availability of ancillary services from DFIG-based wind farms without additional dynamic reactive power devices being installed in power networks.
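As a rough sketch of the aggregation step, assuming per-objective reactive power setpoints in MVAr and a single symmetric converter capability limit (names and numbers are illustrative, not from the paper):

```python
def aggregate_q_reference(q_requirements, q_capability):
    """Sum the per-objective reactive power requirements (MVAr) and clip
    the aggregate reference to the converter's available capability."""
    q_total = sum(q_requirements.values())
    return max(-q_capability, min(q_capability, q_total))

# Setpoints for the control objectives named in the abstract (dummy values).
q_ref = aggregate_q_reference(
    {"power_factor": 0.8, "voltage": 1.5, "loss_min": -0.3, "flicker": 0.6},
    q_capability=2.5,
)
```

In practice the capability limit itself varies with the operating point (the capability curves the coordinated controller consults), so `q_capability` would be recomputed each control cycle.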
Abstract:
A novel approach to the multi-objective design optimisation of aerofoil profiles is presented. The proposed method aims to exploit the relative strengths of global and local optimisation algorithms, while using surrogate models to limit the number of computationally expensive CFD simulations required. The local search stage utilises a re-parameterisation scheme that increases the flexibility of the geometry description by iteratively increasing the number of design variables, enabling superior designs to be generated with minimal user intervention. The capability of the algorithm is demonstrated via the conceptual design of aerofoil sections for a lightweight laminar-flow business jet. The design case is formulated to account for take-off performance while reducing sensitivity to leading-edge contamination. The algorithm successfully manipulates the boundary-layer transition location to provide a set of aerofoils that represent the trade-offs between drag at cruise and climb conditions in the presence of a challenging constraint set. Variations in the underlying flow physics between Pareto-optimal aerofoils are examined to aid understanding of the mechanisms that drive the trade-offs in the objective functions.
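The re-parameterisation idea can be sketched as follows, assuming a simple control-point geometry description whose flexibility is roughly doubled by midpoint insertion; this is an illustrative stand-in, since the paper's actual scheme is not reproduced here:

```python
import numpy as np

def refine_parameterisation(control_points):
    """Insert a new control point at the midpoint of every segment,
    roughly doubling the number of design variables while, for a polyline
    description, leaving the incumbent geometry unchanged (so the local
    search resumes from the current design in a richer design space)."""
    cp = np.asarray(control_points, dtype=float)
    mids = 0.5 * (cp[:-1] + cp[1:])
    out = np.empty((len(cp) + len(mids), cp.shape[1]))
    out[0::2] = cp
    out[1::2] = mids
    return out
```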
Abstract:
Structural optimization is a long-standing topic in engineering. Nevertheless, the growth of the finite element method in recent decades has given rise to an increasing number of applications. Topology optimization, specifically, is associated with the effective-domain definition phase of a global structural optimization process. Based on this type of optimization, it is possible to obtain the optimal material distribution for various applications and loading conditions. Composite materials and some cellular materials, in particular, are among the most prominent materials of our time in terms of applications and of research and development. However, their potentially complex structure and heterogeneous nature entail great complexity, both in predicting their constitutive properties and in obtaining optimal distributions of constituents. Homogenization procedures can provide answers in both cases. In particular, asymptotic expansion homogenization can be used to determine effective, global thermomechanical properties from representative volumes, in a flexible way that is independent of the distribution of constituents. Moreover, it integrates localization procedures and provides detailed information about local sensitivities in multiscale optimization methodologies. Combining these areas can lead to multiscale topology optimization methodologies that yield not only optimal structures but also the ideal distributions of constituent materials. The problems associated with these approaches tend, however, to demand considerable computational resources, often seriously limiting the feasibility of solving them. In this regard, parallel and distributed computing techniques present themselves as a potential solution. By dividing the problems across different memory and processing units, it is possible to tackle problems that would otherwise be prohibitive. The main focus of this work is the development of computational procedures for the aforementioned applications. Additionally, these lead to several alternative approaches in the simultaneous search for structures and materials for thermomechanical applications. Accordingly, all of this is integrated into a computational platform for multiobjective multiscale optimization in thermoelasticity, developed and implemented throughout this work. The work is complemented by the assembly and configuration of a Beowulf-type cluster, as well as by the development of the code for parallel and distributed computing.
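For reference, asymptotic expansion homogenization gives the effective elasticity tensor of a representative unit cell Y in the standard textbook form (the thesis's notation and its thermomechanical extension may differ):

```latex
C^{H}_{ijkl} \;=\; \frac{1}{|Y|} \int_{Y} C_{ijpq}(\mathbf{y})
\left( \delta_{pk}\,\delta_{ql} - \frac{\partial \chi^{kl}_{p}}{\partial y_{q}} \right) \mathrm{d}Y
```

where the characteristic displacement fields χ^{kl} solve the cell problems on Y; the same fields drive the localization step and supply the local sensitivities used in the multiscale optimization.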
Abstract:
A system described by a large number of strongly interdependent elements is complex and difficult to understand and maintain. An object-oriented application is thus often complex, since it contains hundreds of classes with numerous more or less explicit dependencies. The same application, built on the component paradigm, would contain a smaller number of elements, loosely coupled and with clearly defined interdependencies. This is because the component paradigm provides a good high-level representation of complex systems, and it can therefore be used as a "projection space" for object-oriented systems. Such a projection can ease the comprehension stage of a system, a necessary prerequisite for any maintenance and/or evolution activity. Moreover, this representation can be used as a model for completely restructuring an operational object-oriented application into an equally operational, equivalent component-based application, which then benefits from all the good properties associated with the component paradigm. The objective of my thesis is to propose a semi-automatic method for identifying a component-based architecture in an object-oriented application. This architecture must not only aid the comprehension of the original application but also simplify its projection onto a concrete component model. The identification of a component-based architecture is carried out in three main steps: (i) obtaining the data required by the identification process, namely the dependencies between classes, gathered through a dynamic analysis of the target application; (ii) identifying the components, for which three methods were explored: the first uses a Galois lattice, the second two metaheuristics, and the last a multi-objective metaheuristic; (iii) identifying the component-based architecture of the target application by identifying the required and provided interfaces of each component. To validate this identification process, as well as the various choices made during its development, I carried out several case studies. Finally, I show the feasibility of projecting the identified component-based architecture onto a concrete component model.
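As a toy illustration of the component-identification step (ii), the sketch below greedily merges classes whose dynamically observed call counts exceed a threshold into candidate components, using a union-find structure. It is a stand-in for the Galois-lattice and metaheuristic methods actually explored in the thesis, and every name in it is hypothetical.

```python
from collections import defaultdict

def greedy_components(call_counts, threshold=5):
    """call_counts maps (caller_class, callee_class) -> observed calls.
    Classes linked by at least `threshold` calls land in one component."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), n in call_counts.items():
        find(a), find(b)                 # register both classes
        if n >= threshold:
            parent[find(a)] = find(b)    # merge into one component

    groups = defaultdict(set)
    for cls in parent:
        groups[find(cls)].add(cls)
    return list(groups.values())
```

Required and provided interfaces (step iii) would then correspond to the calls that cross the resulting component boundaries.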
Abstract:
The assembly job shop scheduling problem (AJSP) is one of the most complicated combinatorial optimization problems, involving the simultaneous scheduling of the processing and assembly operations of complex structured products. The problem becomes even more complicated if a combination of two or more optimization criteria is considered. This thesis addresses an assembly job shop scheduling problem with multiple objectives: simultaneously minimizing makespan and total tardiness. Two approaches, viz. a weighted approach and a Pareto approach, are used to solve the problem. However, it is quite difficult to achieve an optimal solution with traditional optimization approaches owing to the high computational complexity. Two metaheuristic techniques, namely genetic algorithms and tabu search, are investigated in this thesis for solving the multi-objective assembly job shop scheduling problem (MOAJSP). Three algorithms based on the two metaheuristic techniques, covering the weighted approach and the Pareto approach, are proposed. A new pairing mechanism is developed for the crossover operation in the genetic algorithm, which leads to improved solutions and faster convergence. The performance of the proposed algorithms is evaluated on a set of test problems and the results are reported. The results reveal that the proposed algorithms based on the weighted approach are feasible and effective for solving MOAJSP instances according to the weight assigned to each objective criterion, and that the proposed algorithms based on the Pareto approach are capable of producing a number of good Pareto-optimal scheduling plans for MOAJSP instances.
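The two approaches can be contrasted in a minimal sketch, assuming schedule objects that expose `makespan` and `total_tardiness` attributes (hypothetical names, not from the thesis):

```python
def weighted_score(s, w_makespan, w_tardiness):
    """Weighted approach: scalarize the two criteria; the GA or tabu
    search then minimizes this single value for the chosen weights."""
    return w_makespan * s.makespan + w_tardiness * s.total_tardiness

def pareto_front(schedules):
    """Pareto approach: keep schedules that no other schedule beats on
    both makespan and total tardiness (minimization)."""
    def dominated(s, t):
        return (t.makespan <= s.makespan
                and t.total_tardiness <= s.total_tardiness
                and (t.makespan < s.makespan
                     or t.total_tardiness < s.total_tardiness))
    return [s for s in schedules
            if not any(dominated(s, t) for t in schedules if t is not s)]
```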
Abstract:
The research project starts from the dynamics of the outsourced distribution model of a mass-consumption company in Colombia, specialized in dairy products, which for this study is called "Lactosa". Using panel data with a case study, two demand models are built by product category and distributor, and through stochastic simulation the relevant variables affecting their cost structures are identified. The problem is modelled from the income statement of each of the four distributors analyzed in the central region of the country. The cost structure and sales behaviour are analyzed for a given logistics distribution margin (%), as a function of the relevant independent variables relating to the business, the market, and the macroeconomic environment described in the object of study. Among other findings, notable gaps stand out in distribution costs and sales-force costs, despite the homogeneity of the segments. The study identifies value drivers and the costs with the greatest individual dispersion, and suggests strategic alliances among some groups of distributors. The panel data modelling identifies the relevant management variables that affect sales volume by category and distributor, focusing management's efforts. It is recommended to narrow the gaps and to promote, from the producer's side, strategies focused on standardizing the distributors' internal processes, and to promote and replicate the analysis models without attempting to replace expert knowledge. Scenario building jointly and reliably strengthens the competitive position of the company and its distributors.
Abstract:
The implementation of the European Directive 91/271/EEC concerning urban wastewater treatment promoted the construction of new facilities, as well as the introduction of new technologies for treating nutrients in areas designated as sensitive. Both the design of these new infrastructures and the redesign of existing ones were carried out using approaches based mainly on economic objectives, owing to the need to finish the works within a relatively short period. These studies were based on heuristic knowledge or numerical correlations derived from simplified deterministic models. As a result, many of the resulting wastewater treatment plants (WWTPs) were characterized by a lack of robustness and flexibility, poor controllability, frequent microbiological solids-separation problems in the secondary settler, high operating costs, and only partial nutrient removal, keeping them far from optimal operation. Many of these problems arose from inadequate design, and the scientific community thus realized the importance of the early conceptual design stages. Precisely for this reason, traditional design methods must evolve towards more complex evaluation systems that take multiple objectives into account, thereby ensuring better plant performance. Despite the importance of conceptual design under multiple objectives, there is still a significant gap in the scientific literature covering this field of research. The aim of this thesis is to develop a conceptual design method for WWTPs that considers multiple objectives, serving as a decision-support tool when selecting the best alternative among different design options. This research contributes a modular and evolutionary design method combining techniques such as: a hierarchical decision process, multicriteria analysis, preliminary multiobjective optimization based on sensitivity analysis, knowledge extraction and data mining, multivariate analysis, and uncertainty analysis via Monte Carlo simulations. This is achieved by subdividing the design method developed in this thesis into four main blocks: (1) hierarchical generation and multicriteria analysis of alternatives, (2) analysis of critical decisions, (3) multivariate analysis, and (4) uncertainty analysis. The first block combines a hierarchical decision process with multicriteria analysis. The hierarchical decision process subdivides conceptual design into a series of more easily analyzed and evaluated questions, while multicriteria analysis allows different objectives to be considered simultaneously. This reduces the number of alternatives to evaluate and makes the future design and operation of the plant account for environmental, economic, technical, and legal aspects. Finally, this block includes a sensitivity analysis of the weights, which shows how the ranking of alternatives varies as the relative importance of the design objectives changes. The second block combines sensitivity analysis, preliminary multiobjective optimization, and knowledge extraction techniques to support WWTP conceptual design, selecting the best alternative once critical decisions have been identified.
Critical decisions are those in which one must choose among alternatives that fulfil the design objectives to a similar degree but with different implications for the future structure and operation of the plant. This type of analysis provides a broader view of the design space and identifies desirable (or undesirable) directions in which the design process may evolve. The third block of the thesis provides the multivariate analysis of the multicriteria matrices obtained during the evaluation of the design alternatives. Specifically, the techniques used in this research comprise: (1) cluster analysis, (2) principal component analysis/factor analysis, and (3) discriminant analysis. As a result, the data supporting the selection of alternatives become more accessible, providing more information for a more effective evaluation and ultimately increasing knowledge of the evaluation process for the generated design alternatives. In the fourth and final block developed in this thesis, the design alternatives are evaluated under uncertainty. The aim of this block is to study how decision making changes when an alternative is evaluated with or without uncertainty in the parameters of the models that describe its behaviour. Uncertainty in the model parameters is introduced through probability functions; Monte Carlo simulations are then carried out, drawing random numbers from these distributions and substituting them for the model parameters, making it possible to study how uncertainty propagates through the model. It thus becomes possible to analyze the variation in the overall fulfilment of the design objectives for each alternative, the contributions of environmental, legal, economic, and technical aspects to that variation, and, finally, how the selection of alternatives changes when the relative importance of the design objectives varies. Compared with traditional design approaches, the method developed in this thesis addresses design/redesign problems while taking multiple objectives and multiple criteria into account. At the same time, the decision-making process shows objectively, transparently, and systematically why one alternative is selected over the others, providing the option that best fulfils the stated objectives, showing its strengths and weaknesses and the main correlations between objectives and alternatives, and, finally, accounting for the uncertainty inherent in the model parameters used during the analyses. The capabilities of the developed method are demonstrated through several case studies: selection of the type of biological nitrogen removal (case study #1), optimization of a control strategy (case study #2), redesign of a plant to achieve simultaneous removal of carbon, nitrogen, and phosphorus (case study #3), and analysis of plant-wide control strategies (case studies #4 and #5).
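The fourth block's Monte Carlo propagation can be sketched as follows; `evaluate` stands for whatever model scores a design alternative against the multicriteria objectives, and the parameter names and distributions are placeholders:

```python
import numpy as np

def monte_carlo_scores(evaluate, param_dists, n_runs=1000, seed=0):
    """Sample each uncertain model parameter from its probability
    distribution, evaluate the alternative, and collect the scores,
    so the spread of the output reflects the input uncertainty."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_runs):
        params = {name: draw(rng) for name, draw in param_dists.items()}
        scores.append(evaluate(params))
    return np.array(scores)

# Placeholder distributions for two kinetic parameters of the kind an
# activated-sludge model would carry (values are illustrative only).
dists = {"mu_max": lambda rng: rng.normal(6.0, 0.6),
         "K_S":    lambda rng: rng.uniform(10.0, 30.0)}
```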
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided.
Abstract:
Antenna arrays can provide high and controlled directivity, suitable for radio base stations, radar systems, and point-to-point or satellite links. Optimizing an array design is usually a hard task because of the non-linear, multiobjective character of the problem, requiring the application of numerical techniques such as genetic algorithms. Therefore, in order to optimize the electronic control of the antenna array radiation pattern through genetic algorithms with real codification, a numerical tool was developed that can position the array's major lobe, reduce side lobe levels, cancel interfering signals from specific directions of arrival, and improve the antenna's radiation performance. This was accomplished using antenna theory concepts and optimization methods, mainly genetic algorithms, leading to a numerical tool with creative gene codification and crossover rules, which is one of the most important contributions of this work. The efficiency of the developed genetic algorithm tool is tested and validated in several antenna and propagation applications. It was observed that the numerical results meet the specified requirements, showing the developed tool's ability to handle the considered problems, as well as great potential for application in future work.
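As one concrete example of crossover under real codification, the sketch below implements the standard blend (BLX-α) operator on real-valued chromosomes such as element excitation weights. The thesis defines its own gene codification and crossover rules, so this generic operator is only illustrative:

```python
import random

def blx_alpha_crossover(parent_a, parent_b, alpha=0.5):
    """Each child gene is drawn uniformly from the parents' interval
    stretched by alpha on both sides, so offspring can explore slightly
    beyond the region the parents span."""
    child = []
    for x, y in zip(parent_a, parent_b):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child
```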
Abstract:
This paper presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are implementations that dynamically change many components of the algorithm (self-organizing algorithms). Combinations of GAs with machine learning techniques to improve some of their performance and usability characteristics are also common. In this work, a GA with a machine learning technique was analyzed and applied to an antenna design. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness of a candidate solution (individual) relative to others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency selective surfaces. In this particular work, the presented algorithm was developed to optimize the design of a microstrip antenna, used in wireless communication systems for Ultra-Wideband (UWB) applications. The algorithm optimized two antenna geometry variables, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one spline per optimization objective, to compose a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the simulation result and the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success regarding four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. At the end of the study, an increase in execution time compared with a common GA was observed, due to the time required for the machine learning process. On the plus side, we observed an appreciable gain in the flexibility and accuracy of the results, and a promising path toward extending the algorithm to optimization problems with "n" variables.
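A minimal sketch of the spline-surrogate idea, using SciPy's bicubic `RectBivariateSpline` over a (Ws, Ls) grid, with dummy data standing in for the laboratory measurements; one spline per objective would be built the same way and aggregated into the multiobjective fitness:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

ws = np.linspace(2.0, 12.0, 6)    # slit width samples, mm (illustrative)
ls = np.linspace(4.0, 20.0, 6)    # slit length samples, mm (illustrative)
# Dummy objective values on the grid, e.g. measured bandwidth in GHz.
bandwidth = np.random.default_rng(1).uniform(2.0, 8.0, size=(6, 6))

surrogate = RectBivariateSpline(ws, ls, bandwidth, kx=3, ky=3)  # bicubic

def fitness(Ws, Ls):
    """Spline-estimated objective, cheap enough to call inside the GA
    loop instead of running a full electromagnetic simulation."""
    return surrogate(Ws, Ls).item()
```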
Abstract:
Energy policies and technological progress in the development of wind turbines have made wind power the fastest growing renewable power source worldwide. The inherent variability of this resource requires special attention when analyzing the impacts of high penetration on the distribution network. A time-series steady-state analysis is proposed that assesses technical issues such as energy export, losses, and short-circuit levels. A multiobjective programming approach based on the nondominated sorting genetic algorithm (NSGA) is applied in order to find configurations that maximize the integration of distributed wind power generation (DWPG) while satisfying voltage and thermal limits. The approach has been applied to a medium voltage distribution network considering hourly demand and wind profiles for part of the U.K. The Pareto-optimal solutions obtained highlight the drawbacks of using a single demand and generation scenario, and indicate the importance of appropriate substation voltage settings for maximizing the connection of DWPG.
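The NSGA machinery rests on nondominated sorting; a compact sketch follows, assuming maximisation of each objective and illustrative names throughout:

```python
def nondominated_fronts(pop, objs):
    """Rank a population into successive nondominated fronts. `objs[i]`
    is the objective tuple of `pop[i]`, e.g. (energy_exported, margin)."""
    def dom(a, b):  # does individual a dominate individual b?
        return (all(x >= y for x, y in zip(objs[a], objs[b]))
                and any(x > y for x, y in zip(objs[a], objs[b])))
    remaining, fronts = set(range(len(pop))), []
    while remaining:
        front = {i for i in remaining
                 if not any(dom(j, i) for j in remaining if j != i)}
        fronts.append([pop[i] for i in front])
        remaining -= front
    return fronts
```

The first front is the Pareto-optimal set of configurations; later fronts order the rest of the population for selection.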