942 results for Multi-objective optimization techniques
Abstract:
The Quadratic Minimum Spanning Tree (QMST) problem is a generalization of the Minimum Spanning Tree problem in which, in addition to the linear costs associated with each edge, quadratic costs associated with pairs of edges must be considered. The quadratic costs are due to interaction costs between the edges. When interactions occur between adjacent edges only, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree (AQMST). Both QMST and AQMST are NP-hard and model a number of real-world applications involving infrastructure network design. Linear and quadratic costs are summed in the mono-objective versions of the problems. However, real-world applications often deal with conflicting objectives. In those cases, considering linear and quadratic costs separately is more appropriate, and multi-objective optimization provides more realistic modelling. Exact and heuristic algorithms are investigated in this work for the Bi-objective Adjacent Only Quadratic Spanning Tree Problem. The following techniques are proposed: backtracking, branch-and-bound, Pareto Local Search, Greedy Randomized Adaptive Search Procedure, Simulated Annealing, NSGA-II, Transgenetic Algorithm, Particle Swarm Optimization, and a hybridization of the Transgenetic Algorithm with the MOEA/D technique. Pareto-compliant quality indicators are used to compare the algorithms on a set of benchmark instances proposed in the literature.
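To make the bi-objective setting concrete, here is a minimal sketch (not taken from the thesis; the data structures and numbers are illustrative) of how a spanning tree can be scored separately on its linear cost and its adjacent-only quadratic interaction cost:

```python
from itertools import combinations

def biobjective_cost(tree_edges, c, q):
    """Evaluate a spanning tree under the bi-objective AQMST model.

    tree_edges : iterable of (u, v) edges forming the tree
    c          : dict mapping edge -> linear cost
    q          : dict mapping frozenset({e, f}) -> interaction cost
                 (only adjacent edge pairs carry a quadratic cost)
    Returns (linear_cost, quadratic_cost); the two objectives are kept
    separate instead of being summed as in the mono-objective model.
    """
    edges = list(tree_edges)
    linear = sum(c[e] for e in edges)
    quadratic = 0.0
    for e, f in combinations(edges, 2):
        if set(e) & set(f):                      # adjacent: they share an endpoint
            quadratic += q.get(frozenset((e, f)), 0.0)
    return linear, quadratic

# Example: a 3-edge tree on vertices 0..3
c = {(0, 1): 2.0, (1, 2): 3.0, (1, 3): 1.0}
q = {frozenset({(0, 1), (1, 2)}): 0.5,
     frozenset({(0, 1), (1, 3)}): 0.2,
     frozenset({(1, 2), (1, 3)}): 0.1}
print(biobjective_cost(c.keys(), c, q))          # -> (6.0, 0.8)
```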
Abstract:
Phylogenetic inference consists of searching for an evolutionary tree that best explains the genealogical relationships of a set of species. Phylogenetic analysis has a large number of applications in areas such as biology, ecology, and paleontology. Several criteria have been defined for inferring phylogenies, among them maximum parsimony and maximum likelihood. The first seeks the phylogenetic tree that minimizes the number of evolutionary steps needed to describe the evolutionary history of the species, while the second seeks the tree with the highest probability of producing the observed data according to an evolutionary model. The search for a phylogenetic tree can be formulated as a multi-objective optimization problem that aims to find trees satisfying both the parsimony and likelihood criteria simultaneously (and as far as possible). Because these criteria differ, there is no single optimal solution (a single tree) but rather a set of compromise solutions, called "Pareto optimal" solutions. Evolutionary algorithms are nowadays used with success to find this set. These algorithms are a family of non-exact techniques inspired by the process of natural selection; they usually find high-quality solutions to difficult optimization problems. They work by manipulating a set of trial solutions (trees, in the case of phylogeny) with operators, some of which exchange information between solutions, simulating DNA crossover, while others apply random modifications, simulating mutation. The result is an approximation of the Pareto-optimal set, which can be shown in a graph so that the domain expert (the biologist, in the case of inference) can choose the compromise solution of greatest interest. For multi-objective optimization applied to phylogenetic inference, there is an open-source software tool, called MO-Phylogenetics, designed to solve inference problems with classic and state-of-the-art evolutionary algorithms. REFERENCES [1] C.A. Coello Coello, G.B. Lamont, D.A. van Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems. Springer. August 2007. [2] C. Zambrano-Vega, A.J. Nebro, J.F. Aldana-Montes. MO-Phylogenetics: a phylogenetic inference software tool with multi-objective evolutionary metaheuristics. Methods in Ecology and Evolution. In press. February 2016.
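A small illustration of the Pareto-optimality notion used above: given candidate trees scored by parsimony (to be minimized) and log-likelihood (to be maximized), the compromise set is the subset not dominated by any other candidate. This sketch is illustrative only; the tree labels and scores are hypothetical.

```python
def dominates(a, b):
    """a, b are (parsimony, log_likelihood) pairs: parsimony is minimized,
    log-likelihood is maximized.  a dominates b if it is no worse in both
    objectives and strictly better in at least one."""
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    better = a[0] < b[0] or a[1] > b[1]
    return no_worse and better

def pareto_front(scored_trees):
    """scored_trees: list of (tree, (parsimony, log_likelihood)).
    Returns the non-dominated (Pareto-optimal) subset."""
    return [(t, s) for t, s in scored_trees
            if not any(dominates(s2, s) for _, s2 in scored_trees if s2 != s)]

# Toy scores for four candidate trees (values are illustrative only)
trees = [("T1", (120, -4500.0)), ("T2", (118, -4520.0)),
         ("T3", (125, -4490.0)), ("T4", (130, -4515.0))]
for t, s in pareto_front(trees):
    print(t, s)     # T1, T2 and T3 remain; T4 is dominated by T1
```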
Abstract:
Efficient hill climbers have recently been proposed for single- and multi-objective pseudo-Boolean optimization problems. For $k$-bounded pseudo-Boolean functions where each variable appears in at most a constant number of subfunctions, it has been theoretically proven that the neighborhood of a solution can be explored in constant time. These hill climbers, combined with a high-level exploration strategy, have been shown to improve on state-of-the-art methods in experimental studies and open the door to so-called Gray Box Optimization, where part, but not all, of the details of the objective functions is used to better explore the search space. One important limitation of all the previous proposals is that they can only be applied to unconstrained pseudo-Boolean optimization problems. In this work, we address the constrained case for multi-objective $k$-bounded pseudo-Boolean optimization problems. We find that adding constraints to the pseudo-Boolean problem has a linear computational cost in the hill climber.
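The efficiency claim rests on partial evaluation: flipping one bit only changes the subfunctions that contain it. The sketch below shows that idea for a single objective; the published climbers additionally maintain score vectors so the best move is located without rescanning, and the function names and example problem here are hypothetical.

```python
import random

def hill_climb(subfns, var_of, n, steps=1000, seed=0):
    """Stochastic hill climber for f(x) = sum_j subfns[j](x).

    subfns[j]  : callable taking the full bit list x
    var_of[j]  : tuple of variable indices that subfns[j] actually reads
    Flipping bit i only changes the subfunctions listed in touches[i],
    so each move is evaluated by re-scoring a bounded number of terms.
    """
    rng = random.Random(seed)
    touches = [[] for _ in range(n)]
    for j, vs in enumerate(var_of):
        for v in vs:
            touches[v].append(j)

    x = [rng.randint(0, 1) for _ in range(n)]
    value = sum(f(x) for f in subfns)

    for _ in range(steps):
        i = rng.randrange(n)
        before = sum(subfns[j](x) for j in touches[i])   # only affected terms
        x[i] ^= 1
        after = sum(subfns[j](x) for j in touches[i])
        if after > before:             # keep strictly improving flips
            value += after - before
        else:                          # revert otherwise
            x[i] ^= 1
    return x, value

# Example: n = 6 variables, subfunctions each reading 3 bits (k = 3)
n = 6
var_of = [(i, (i + 1) % n, (i + 2) % n) for i in range(n)]
subfns = [lambda x, vs=vs: int(sum(x[v] for v in vs) % 2 == 1) for vs in var_of]
print(hill_climb(subfns, var_of, n))
```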
Abstract:
To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research is crucially needed to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned. This dissertation research studies the integration of techniques and methodologies resulting from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network and carry the programs and data states needed to perform the assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using the equipped pattern recognition algorithms. The optimal control of agents is achieved by mimicking the adaptive immune response and by applying multi-objective optimization algorithms. The mobile agent approach has the potential to reduce the communication load and energy consumption in monitoring networks. The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time-series measurement data; (2) investigating the impact of the feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms to adaptive structural health monitoring and driving cycle pattern recognition; (7) developing a web-based monitoring network to enable the remote visualization and analysis of real-time sensor data. Techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
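As a purely hypothetical illustration of the kind of check a monitoring agent could carry, the sketch below combines simple time-series feature extraction with a dissimilarity threshold; the feature set, dissimilarity measure, and threshold are illustrative and are not those studied in the dissertation.

```python
import numpy as np

def extract_features(signal):
    """Illustrative time-series features: mean, std, RMS, peak-to-peak."""
    s = np.asarray(signal, dtype=float)
    return np.array([s.mean(), s.std(), np.sqrt((s**2).mean()), s.max() - s.min()])

def is_anomalous(window, baseline_windows, threshold=3.0):
    """Flag a measurement window whose feature vector lies too far
    (per-feature z-score norm) from the centre of the baseline windows."""
    feats = np.array([extract_features(w) for w in baseline_windows])
    centre, spread = feats.mean(axis=0), feats.std(axis=0) + 1e-12
    z = (extract_features(window) - centre) / spread
    return np.linalg.norm(z) > threshold

# Baseline: healthy vibration-like signals; test: one with a shifted amplitude
rng = np.random.default_rng(0)
baseline = [np.sin(np.linspace(0, 10, 200)) + 0.1 * rng.standard_normal(200)
            for _ in range(20)]
test = 3.0 * np.sin(np.linspace(0, 10, 200))
print(is_anomalous(test, baseline))        # -> True
```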
Abstract:
The objective of this study is to identify the optimal designs of converging-diverging supersonic and hypersonic nozzles that perform at maximum uniformity of thermodynamic and flow-field properties with respect to their average values at the nozzle exit. Since this is a multi-objective design optimization problem, the design variables used are parameters defining the shape of the nozzle. This work shows how variations of these parameters influence the nozzle-exit flow non-uniformities. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow field in forty nozzle shapes, including the heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, the Response Surface Methodology (RSM) was applied to perform a multi-objective optimization. The optimization was performed with the ModeFrontier software package using Kriging and Radial Basis Function (RBF) response surfaces. The final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
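A minimal sketch of the response-surface step, assuming design parameters and a scalar non-uniformity measure have already been extracted from the CFD runs; the radial basis function below is hand-rolled and the data are stand-ins, not FLUENT results.

```python
import numpy as np

def fit_rbf(X, y, eps=1.0):
    """Fit a Gaussian RBF surrogate y(x) ~ sum_i w_i * exp(-eps * ||x - X_i||^2)
    to sampled designs (X: shape parameters, y: a non-uniformity measure)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-eps * d2)
    w = np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)   # small ridge for stability
    return lambda x: np.exp(-eps * ((X - x) ** 2).sum(-1)) @ w

# Illustrative data: 2 shape parameters, objective = exit-flow non-uniformity
X = np.random.default_rng(1).uniform(0, 1, size=(40, 2))   # 40 nozzle shapes
y = (X[:, 0] - 0.6) ** 2 + 0.5 * (X[:, 1] - 0.4) ** 2       # stand-in for CFD output
surrogate = fit_rbf(X, y, eps=4.0)
print(surrogate(np.array([0.6, 0.4])))     # low predicted non-uniformity near the optimum
```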
Abstract:
End-of-life product management is a topic of current interest for companies; increasingly, firms can no longer avoid implementing an efficient Reverse Logistics system. To respond effectively to these new needs, it becomes essential to extend traditional logistics systems to all the activities carried out within Reverse Logistics. Effective and efficient management of the entire supply chain is of primary importance for a company and strongly affects its competitiveness; precisely to pursue this goal, more and more companies promote both Lean and Green supply chain management policies. The objective of this work, born from the needs described above, is to apply an innovative model that considers both Lean management policies and, dually, Green policies to the management of a supply chain in the automotive sector, including the management of end-of-life vehicles (ELV). First, the basic principles and the tools used for applying Lean Production and Green supply chain management were analysed; then the distinctive characteristics of Reverse Logistics, and in particular of networks handling end-of-life vehicles, were examined. The final objective of the study is to develop and implement, using the AMPL software, a Lean and Green multi-objective optimization (MOP) model for a Reverse Supply Chain of end-of-life vehicles. The results obtained show that an excellent compromise between the two approaches can be reached. An economic evaluation of the results was also carried out, showing that the chosen trade-off also represents one of the lowest-cost scenarios.
Abstract:
This thesis deals with optimization techniques and the modeling of vehicular networks. Thanks to models based on integer linear programming (ILP) and heuristic ones, it was possible to study the performance of 5G networks for vehicular applications. Thanks to the Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) paradigms, it was possible to study the performance of different classes of service, such as the Ultra-Reliable Low-Latency Communications (URLLC) class and the enhanced Mobile BroadBand (eMBB) class, and how the functional split can benefit network resource management. Two different protection techniques have been studied: Shared Path Protection (SPP) and Dedicated Path Protection (DPP). Thanks to these different protections, it is possible to achieve different network reliability requirements, according to the needs of the end user. Moreover, thanks to a simulator developed in Python, it was possible to study the dynamic allocation of resources in a 5G metro network. Through different provisioning algorithms and different dynamic resource management techniques, useful results have been obtained for understanding the needs of the vehicular networks that will exploit 5G. Finally, two models are presented for reconfiguring backup resources when shared-resource protection is used.
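To illustrate the capacity-side difference between the two protection schemes, here is a sketch (network, demands, and rates hypothetical) of how backup capacity is reserved per demand under DPP but shared across single-failure scenarios under SPP:

```python
def backup_capacity(demands, shared):
    """demands: list of dicts with keys 'working', 'backup' (lists of link ids)
    and 'rate'.  Returns {backup link: capacity to reserve}."""
    reserved = {}
    if not shared:                       # DPP: every demand gets its own backup capacity
        for d in demands:
            for l in d["backup"]:
                reserved[l] = reserved.get(l, 0) + d["rate"]
        return reserved
    # SPP under single-link failures: on each backup link, reserve enough for
    # the worst single working-link failure that reroutes traffic onto it.
    working_links = {l for d in demands for l in d["working"]}
    backup_links = {l for d in demands for l in d["backup"]}
    for b in backup_links:
        worst = 0
        for w in working_links:
            need = sum(d["rate"] for d in demands
                       if w in d["working"] and b in d["backup"])
            worst = max(worst, need)
        reserved[b] = worst
    return reserved

demands = [
    {"working": ["A-B"], "backup": ["A-C", "C-B"], "rate": 10},
    {"working": ["D-E"], "backup": ["A-C", "C-E"], "rate": 10},  # disjoint working path
]
print(backup_capacity(demands, shared=False))  # DPP reserves 20 on link A-C
print(backup_capacity(demands, shared=True))   # SPP reserves only 10 on link A-C
```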
Abstract:
Market-based transmission expansion planning gives investors information on where investment is most cost-efficient and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate the system's overall expected economic losses during operation in a competitive market. It reflects both the investors' and the planner's points of view and improves on the traditional reliability cost. By applying EEL, system planners can obtain a clear idea of the transmission network's bottleneck and of the amount of losses arising from this weak point. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors to guide their investment decisions. This index can truly reflect the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so that it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), showing how the EEL can predict the current system bottleneck under future operational conditions and how EEL can be used as one of the planning objectives to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as the multi-objective optimization tool.
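A minimal sketch of the Monte Carlo step behind an expected-loss index: sample random system states (forced outages, load level, market price), price the resulting losses, and average. The state model and all numbers below are hypothetical stand-ins, not the paper's EEL formulation.

```python
import random

def sample_state(rng):
    """Randomly sample line availabilities, load level and market price."""
    lines_up = [rng.random() > 0.02 for _ in range(5)]   # 2 % forced outage rate
    load = rng.gauss(1.0, 0.1)                           # p.u. system load
    price = rng.gauss(40.0, 8.0)                         # $/MWh market price
    return lines_up, load, price

def economic_loss(lines_up, load, price, capacity_per_line=0.25):
    """Loss = cost of load that cannot be served through the available lines."""
    transfer_limit = capacity_per_line * sum(lines_up)
    curtailed = max(0.0, load - transfer_limit)
    return curtailed * price * 1000.0                    # scale p.u. -> MWh

def expected_economic_loss(n_samples=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += economic_loss(*sample_state(rng))
    return total / n_samples

print(f"EEL estimate: {expected_economic_loss():.1f} $/h")
```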
Abstract:
The restructuring of the power industry has brought fundamental changes to both power system operation and planning. This paper presents a new planning method that uses a multi-objective optimization (MOOP) technique, as well as human knowledge, to expand the transmission network in open access schemes. The method starts with a candidate pool of feasible expansion plans. The subsequent selection of the best candidates is carried out through a MOOP approach, in which multiple objectives are tackled simultaneously, aiming at integrating market operation and planning as one unified process in the context of a deregulated system. Human knowledge is applied at both stages to ensure that the selection reflects practical engineering and management concerns. The expansion plan from MOOP is assessed against reliability criteria before it is finalized. The proposed method has been tested on the IEEE 14-bus system, and the relevant analyses and discussions are presented.
Abstract:
Generating manipulator trajectories that consider multiple objectives and obstacle avoidance is a non-trivial optimization problem. In this paper, a technique based on a multi-objective genetic algorithm is proposed to address this problem. Multiple criteria are optimized, considering up to five simultaneous objectives. Simulation results are presented for robots with two and three degrees of freedom, considering two- and five-objective optimization. An analysis of the spread and distribution of solutions along the converged non-dominated Pareto front is then carried out in terms of the achieved diversity.
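One common way to quantify the diversity mentioned above is a spread indicator over consecutive gaps along the front. The sketch below implements a simplified variant of Deb's Delta spread metric (without the extreme-point terms); the front values are purely illustrative.

```python
import numpy as np

def delta_spread(front):
    """front: array-like of shape (n, 2) of non-dominated objective vectors.
    Smaller values mean more evenly distributed solutions."""
    f = np.asarray(sorted(map(tuple, front)))          # sort along objective 1
    d = np.linalg.norm(np.diff(f, axis=0), axis=1)     # consecutive gaps
    d_mean = d.mean()
    return np.abs(d - d_mean).sum() / (len(d) * d_mean)

even   = np.array([[0, 4], [1, 3], [2, 2], [3, 1], [4, 0]])
uneven = np.array([[0, 4], [0.2, 3.8], [0.4, 3.6], [3.8, 0.2], [4, 0]])
print(delta_spread(even), delta_spread(uneven))   # the even front scores lower
```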
Abstract:
Smart grids with an intensive penetration of distributed energy resources will play an important role in future power system scenarios. The intermittent nature of renewable energy sources brings new challenges and requires an efficient management of those sources. Additional storage resources can be beneficially used to address this problem; the massive use of electric vehicles, particularly vehicle-to-grid capable ones (usually referred to as gridable vehicles or V2G), becomes a very relevant issue. This paper addresses the impact of Electric Vehicles (EVs) on system operation costs and on the power demand curve for a distribution network with a large penetration of Distributed Generation (DG) units. An efficient management methodology for EV charging and discharging is proposed, formulated as a multi-objective optimization problem. The main goals of the proposed methodology are to minimize the system operation costs and to minimize the difference between the minimum and maximum system demand (leveling the power demand curve). The proposed methodology performs the day-ahead scheduling of distributed energy resources in a distribution network with high penetration of DG and a large number of electric vehicles. A 32-bus distribution network is used in the case study, considering different scenarios of EV penetration, to analyze their impact on the network and on the management of the other energy resources.
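A sketch of how a candidate charging/discharging schedule can be scored on the two objectives above, operation cost and peak-to-valley demand range; the demand curve, prices, and schedule are hypothetical, and the two objectives generally trade off against each other.

```python
import numpy as np

def evaluate_schedule(base_demand, ev_power, energy_price):
    """base_demand  : (24,) non-EV demand per hour [MW]
    ev_power        : (24,) aggregated EV power per hour [MW], >0 charging, <0 discharging
    energy_price    : (24,) energy cost per hour [$/MWh]
    Returns (operation_cost, demand_range)."""
    demand = base_demand + ev_power
    operation_cost = float(np.sum(demand * energy_price))
    demand_range = float(demand.max() - demand.min())    # leveling objective
    return operation_cost, demand_range

hours = np.arange(24)
base = 30 + 10 * np.sin((hours - 6) * np.pi / 12)        # illustrative daily load shape
price = 40 + 20 * (base - base.min()) / (base.max() - base.min())
valley_charging = np.where(base < 28, 3.0, -1.0)         # charge off-peak, discharge on-peak
print(evaluate_schedule(base, np.zeros(24), price))      # no EVs
print(evaluate_schedule(base, valley_charging, price))   # flatter curve: smaller demand range
```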
Abstract:
Dissertation submitted for the degree of Master in Electrical Engineering, Energy branch.
Abstract:
Network design problems generally consist of selecting the arcs and vertices of a graph G so that a cost function is optimized and a set of constraints involving the links and vertices of G is satisfied. A change in the optimization criterion and/or in the set of constraints leads to a new representation of a different problem. In this thesis we are interested in the infrastructure design problem for Wireless Mesh Networks (WMNs), and we show that the design of such networks turns from a standard optimization problem (in which a cost function is optimized) into a multi-objective optimization problem, in order to take into account numerous aspects that are often conflicting yet unavoidable in practice. This thesis, organized in three parts, proposes new models and algorithms for designing WMNs where nothing is known in advance. The first part is devoted to the simultaneous optimization of two equally important objectives: the cost and the network performance in terms of throughput. Three bi-objective models, which differ mainly in the approach used to maximize network performance, are proposed, solved and compared. The second part addresses the gateway placement problem, given its impact on network performance and scalability. The notion of hop constraints is introduced into the network design to limit the transmission delay. A new algorithm based on a clustering approach is proposed to find strategic gateway positions that promote network scalability and increase its performance without significantly increasing the total installation cost. The last part addresses the network reliability problem in the presence of single failures. Planning the installation of redundant components at the design stage can guarantee reliable communications, but at the expense of network cost and performance. A new algorithm, based on the theoretical approach of ear decomposition, is developed to install the minimum number of additional routers needed to tolerate single failures. To solve the proposed models for real-size networks, a nature-inspired evolutionary (metaheuristic) algorithm is developed. Finally, the proposed methods and models were evaluated through empirical and discrete-event simulations.
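As an illustration of the hop constraints introduced in the second part, the sketch below checks whether a candidate set of gateways leaves every mesh router within a maximum number of hops of some gateway; the topology and limits are hypothetical, and this is not the clustering algorithm of the thesis.

```python
from collections import deque

def hops_from(graph, source):
    """Breadth-first search returning hop distances from `source`.
    graph: dict mapping node -> iterable of neighbour nodes."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def respects_hop_constraint(graph, gateways, max_hops):
    """True if every node is within `max_hops` of at least one gateway."""
    best = {n: float("inf") for n in graph}
    for g in gateways:
        for n, d in hops_from(graph, g).items():
            best[n] = min(best[n], d)
    return all(d <= max_hops for d in best.values())

# Toy 6-router chain with gateways at one or both ends
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(respects_hop_constraint(chain, gateways=[0], max_hops=2))       # False
print(respects_hop_constraint(chain, gateways=[0, 5], max_hops=2))    # True
```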
Abstract:
In recent years, research on Wireless Mesh Networks (WMNs) has attracted great interest in the telecommunications research community. This is due to the many advantages that WMN technology offers, such as easy and inexpensive deployment, reliable connectivity, and flexible interoperability with other existing networks (Wi-Fi networks, WiMax networks, cellular networks, sensor networks, etc.). However, several problems remain to be solved, such as scalability, security, quality of service (QoS), and resource management. These problems persist in WMNs, all the more so as the number of users keeps growing. Existing protocols therefore need to be improved, or new ones designed. The objective of our research is to overcome some of the current limitations of WMNs and to improve the QoS of real-time multimedia applications (for example, voice). The research work of this thesis is essentially divided into three main parts: traffic admission control, traffic differentiation, and adaptive channel reassignment in the presence of handoff traffic. In the first part, we propose a distributed admission control mechanism, called RCAC, based on the concept of cliques (a clique is a subset of logical links that interfere with one another) in a multi-hop, multi-radio, multi-channel network. In particular, we propose an analytical model that computes the appropriate traffic admission ratio and guarantees that the packet loss probability in the network does not exceed a predefined threshold. The RCAC mechanism ensures the required QoS for incoming flows without degrading the QoS of existing flows. It also ensures QoS in terms of end-to-end delay for the various flows. The second part deals with service differentiation in the IEEE 802.11s protocol in order to provide better QoS, especially for applications with timing constraints (for example, voice and videoconferencing). In this regard, we propose a mechanism that adjusts time slots according to the service class, ED-MDA (Enhanced Differentiated-Mesh Deterministic Access), combined with an efficient admission control algorithm, EAC (Efficient Admission Control), to achieve high and efficient resource utilization. The EAC mechanism takes handoff traffic into account and gives it higher priority than new traffic in order to minimize interruptions of ongoing communications. In the third part, we focus on minimizing the overhead and delay of re-routing mobile users and/or multimedia applications by reassigning channels in Multi-Radio WMNs (MR-WMNs). First, we propose an optimization model that maximizes throughput, improves fairness among users, and minimizes the overhead due to call handoffs. This model was solved with the CPLEX software for a limited number of nodes. Second, we develop centralized heuristics/metaheuristics to solve this model for real-size networks. Finally, we propose an algorithm to reassign channels to interfaces in real time and in a conservative way. This algorithm aims to minimize the overhead and delay of re-routing, especially of the dynamic traffic generated by handoff calls. This mechanism is then improved by taking load balancing among cliques into account.
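For context on the clique concept used by RCAC, the sketch below shows the generic clique-capacity admission test on which such mechanisms build: within a set of mutually interfering links only one can transmit at a time, so the summed airtime of the flows crossing those links must stay below a bound. It is not the RCAC analytical model itself, and all numbers are hypothetical.

```python
def admit(flow_demand, flow_links, existing_airtime, cliques, capacity, bound=0.95):
    """flow_demand     : requested rate of the new flow (Mb/s)
    flow_links         : links traversed by the new flow
    existing_airtime   : dict link -> airtime fraction already committed
    cliques            : list of sets of mutually interfering links
    capacity           : dict link -> physical link rate (Mb/s)
    Admit the flow only if every clique keeps its total airtime under `bound`."""
    airtime = dict(existing_airtime)
    for l in flow_links:
        airtime[l] = airtime.get(l, 0.0) + flow_demand / capacity[l]
    return all(sum(airtime.get(l, 0.0) for l in clique) <= bound
               for clique in cliques)

capacity = {"a": 54.0, "b": 54.0, "c": 54.0}
cliques = [{"a", "b"}, {"b", "c"}]
existing = {"a": 0.4, "b": 0.3, "c": 0.2}
print(admit(10.0, ["b", "c"], existing, cliques, capacity))  # adds ~0.185 airtime per link -> True
```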
Abstract:
Medicinal herbs are exposed to numerous influences during the drying process which decisively affect the quality of the final product. This research work deals with the drying of lemon balm (Melissa officinalis L.) into a high-quality end product. Drying strategies are proposed that incorporate experimental and mathematical aspects in order to achieve the required quality characteristics, in terms of colour change and essential-oil content, at an adequate productivity. At present, dried lemon balm cannot always meet the high quality requirements of the market because of various problems in the drying process. There is no standardized information on the individual and complex drying parameters. In practice, drying is based on empirical values, or procedures used for drying other plants are copied; often the drying is not reproducible or relies on subjective approximations. As a consequence of this inappropriate choice of drying parameters, problems such as over-drying frequently arise, leading to increased breakage losses of the leaf mass, or insufficient drying, which results in too high a residual moisture content in the product. This inevitably leads to an unacceptable colour change and an excessive loss of essential oils. Because of the different thermal and mechanical properties of leaves and stems, uneven drying is the rule. In addition, unnecessarily long drying times are observed, which lead to increased energy consumption. Drying in solar tunnel dryers entails a further problem: because of the uncontrolled incidence of radiation, it is difficult to regulate the drying temperature, and the radiation also affects the colour of the product through photochemical reactions. Moreover, the large fluctuations of radiation, temperature and air humidity create unstable conditions for uniform and controllable drying. In view of these problems, the following research priorities are set in this work: new strategies for improving quality are developed, with the aim of reducing drying time and energy consumption. To propose a methodology based on optimal drying parameters, temperature and air humidity were considered as variables as a function of drying time, essential-oil content, colour change and required energy. In addition, these parameters and their effects on the quality characteristics in solar tunnel dryers were analysed. Different approaches were pursued to reach these goals. The sorption isotherms and the drying kinetics of lemon balm, and their fitting to various mathematical models, were determined. Likewise, an alternative staged drying in stepped phases was carried out in order to increase the quality of the final product while reducing the total energy consumption. In addition, a statistical experimental design based on the CCD (Central Composite Design) and RSM (Response Surface Methodology) methods was proposed to obtain the desired quality characteristics and the necessary energy input as a function of air temperature and air humidity.
From the data obtained, regression models were generated and the behaviour of the drying process was described. Finally, a statistical DOE (design of experiments) approach was applied to evaluate the influence of the parameters on the product quality achievable in a solar tunnel dryer. The effects of shading, position in the tunnel, loading density and air velocity on drying time, colour change and essential-oil content were analysed, and corresponding regression models for application in solar tunnel dryers were developed. The main results are analysed with respect to optimal drying parameters in terms of quality and energy consumption.
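A minimal sketch of the CCD/RSM regression step described above: fit a full second-order model of one quality response to the design runs and use it to predict untested temperature/humidity settings. All measurements below are hypothetical.

```python
import numpy as np

def quadratic_design_matrix(T, H):
    """Columns: 1, T, H, T*H, T^2, H^2 (a full second-order RSM model)."""
    return np.column_stack([np.ones_like(T), T, H, T * H, T**2, H**2])

# Hypothetical CCD runs (coded temperature/humidity) and measured oil retention [%]
T = np.array([-1, -1,  1,  1, -1.41, 1.41,  0,     0,    0,  0,  0])
H = np.array([-1,  1, -1,  1,  0,    0,    -1.41,  1.41, 0,  0,  0])
y = np.array([82, 85, 74, 78, 86,    70,    77,    83,   81, 80, 82])

X = quadratic_design_matrix(T, H)
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares fit
predict = lambda t, h: quadratic_design_matrix(np.atleast_1d(t), np.atleast_1d(h)) @ coeffs
print(np.round(coeffs, 2))          # fitted response-surface coefficients
print(predict(-0.5, 0.5))           # predicted retention at an untested setting
```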