903 results for Parallel Evolutionary Algorithms
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. Step by step, the evolutionary process selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
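As a rough illustration of the synthesis loop described in this abstract, the following Python sketch shows a generic evolutionary cycle in which candidate distributed programs are scored by averaging an objective function over several randomized network simulations; all function names (random_program, simulate, objective, mutate, crossover) are hypothetical placeholders supplied by the caller, not the representations or simulator used in the thesis.

    import random

    def evaluate(program, simulate, objective, n_runs=5):
        # Average the objective over several randomized network simulations.
        return sum(objective(simulate(program, seed=s)) for s in range(n_runs)) / n_runs

    def evolve(random_program, simulate, objective, mutate, crossover,
               pop_size=100, generations=200):
        # Generic GP loop: a lower objective value means the programs' behaviour
        # is closer to the specified global behaviour of the distributed system.
        population = [random_program() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda p: evaluate(p, simulate, objective))
            parents = population[:pop_size // 2]                   # most promising candidates
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(pop_size - len(parents))]   # recombine and modify
            population = parents + children
        return min(population, key=lambda p: evaluate(p, simulate, objective))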
Abstract:
Previous work by Professor John Frazer on Evolutionary Architecture provides a basis for the development of a system evolving architectural envelopes in a generic and abstract manner. Recent research by the authors has focused on the implementation of a virtual environment for the automatic generation and exploration of complex forms and architectural envelopes, based on solid modelling techniques and the integration of evolutionary algorithms and enhanced computational and mathematical models. Abstract data types are introduced for genotypes in a genetic algorithm in order to develop complex models using generative and evolutionary computing techniques. Multi-objective optimisation techniques are employed for defining the fitness function in the evaluation process.
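A minimal sketch, in Python rather than the authors' system, of what an abstract data type for a genotype with a multi-objective fitness evaluation might look like; the weighted-sum aggregation and all names here are assumptions for illustration only.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Genotype:
        # Abstract data type for a genotype: a list of real-valued genes that a
        # generative procedure (not shown) would decode into an architectural envelope.
        genes: List[float]

    def multi_objective_fitness(g: Genotype,
                                objectives: List[Callable[[Genotype], float]],
                                weights: List[float]) -> float:
        # One simple (weighted-sum) way to combine several objectives into a single
        # fitness value; the authors' actual evaluation process may differ.
        return sum(w * f(g) for w, f in zip(weights, objectives))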
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas, such that they seem self-evident and inevitable after the event. But the next step is anything but obvious for the artist/creator/inventor/designer stuck at that point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity - it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed the solution is often the very definition of the problem. Design must be creative - or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise: can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for and the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process: Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby actually further confound the issue. The term search becomes a misnomer, since it has connotations implying that it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus, any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious. Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless we still have this tantalizing possibility: if a creative idea seems inevitable after the event, then somehow might the process be reversed? This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process, without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). But nevertheless Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as the application of evolutionary algorithms, which are usually thought of as search algorithms.
It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments. Most importantly, nature has all the time in the world. As designers we cannot afford such prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
Abstract:
The chapter investigates Shock Control Bumps (SCB) on a Natural Laminar Flow (NLF) aerofoil, RAE 5243, for Active Flow Control (AFC). An SCB approach is used to decelerate supersonic flow on the suction/pressure sides of a transonic aerofoil, which leads to delaying shock occurrence or weakening the shock strength. Such an AFC technique significantly reduces the total drag at transonic speeds. This chapter considers SCB shape design optimisation at two boundary layer transition positions (0 and 45%) using Euler software coupled with viscous boundary layer effects and robust Evolutionary Algorithms (EAs). The optimisation method is based on a canonical Evolution Strategy (ES) algorithm and incorporates the concepts of hierarchical topology and parallel asynchronous evaluation of candidate solutions. Two test cases are considered with numerical experiments; the first test deals with a transition point occurring at the leading edge, while in the second test the transition point is fixed at 45% of the wing chord. Numerical results are presented, and it is demonstrated that an optimal SCB design can be found which significantly reduces transonic wave drag and improves the lift-to-drag (L/D) ratio when compared to the baseline aerofoil design.
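Setting the hierarchical-topology detail aside, the parallel asynchronous evaluation idea can be sketched as follows in Python; the drag function below is a dummy stand-in for the Euler plus viscous boundary-layer evaluation of one candidate SCB shape, and the (mu, lambda) parameters are illustrative assumptions.

    import random
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def drag(bump_params):
        # Dummy stand-in for the CFD evaluation of one candidate SCB shape.
        return sum((x - 0.5) ** 2 for x in bump_params)

    def asynchronous_es(dim=6, mu=10, lam=40, generations=50, sigma=0.1):
        # (mu, lambda)-style evolution strategy whose candidate solutions are
        # evaluated asynchronously in parallel, so slow evaluations do not block
        # the faster ones.
        parents = [[random.random() for _ in range(dim)] for _ in range(mu)]
        with ProcessPoolExecutor() as pool:
            for _ in range(generations):
                offspring = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                             for _ in range(lam)]
                futures = {pool.submit(drag, child): child for child in offspring}
                scored = [(f.result(), futures[f]) for f in as_completed(futures)]
                parents = [c for _, c in sorted(scored, key=lambda t: t[0])[:mu]]
        return parents[0]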
Abstract:
The main objective of this paper is to detail the development of a feasible hardware design based on Evolutionary Algorithms (EAs) to determine flight path planning for Unmanned Aerial Vehicles (UAVs) navigating terrain with obstacle boundaries. The design architecture includes hardware implementations of the Light Detection And Ranging (LiDAR) terrain and EA population memories, as well as the EA search and evaluation algorithms used in the optimisation stage of path planning. A synthesisable Very-high-speed integrated circuit Hardware Description Language (VHDL) implementation of the design was developed for realisation on a Field Programmable Gate Array (FPGA) platform. Simulation results show significant speedup compared with an equivalent software implementation written in C++, suggesting that the present approach is well suited for UAV real-time path planning applications.
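The design itself is written in VHDL for an FPGA; purely to illustrate the kind of evaluation and variation the EA hardware performs, here is a software sketch in Python with a hypothetical path representation (a list of waypoints) and circular no-fly zones standing in for the LiDAR-derived obstacle boundaries.

    import math
    import random

    def path_cost(waypoints, obstacles, penalty=1000.0):
        # Fitness: total path length plus a heavy penalty for every waypoint that
        # falls inside an obstacle (modelled here as circular no-fly zones).
        length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
        hits = sum(1 for p in waypoints
                   for (cx, cy, r) in obstacles if math.dist(p, (cx, cy)) <= r)
        return length + penalty * hits

    def mutate(waypoints, sigma=0.5):
        # Perturb one intermediate waypoint; the start and goal points stay fixed.
        w = [list(p) for p in waypoints]
        i = random.randrange(1, len(w) - 1)
        w[i][0] += random.gauss(0, sigma)
        w[i][1] += random.gauss(0, sigma)
        return [tuple(p) for p in w]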
Abstract:
This paper investigates the field programmable gate array (FPGA) approach for multi-objective and multi-disciplinary design optimisation (MDO) problems. One class of optimisation methods that has been well studied and established for large and complex problems, such as those inherent in MDO, is multi-objective evolutionary algorithms (MOEAs). The MOEA nondominated sorting genetic algorithm II (NSGA-II) is hardware-implemented on an FPGA chip. Applying the FPGA-based NSGA-II to multi-objective test problem suites has verified the effectiveness of the designed implementation. Results show that NSGA-II on FPGA is three orders of magnitude better than the PC-based counterpart.
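The core of NSGA-II is non-dominated sorting; a minimal Python sketch of the Pareto-dominance test and first-front extraction (assuming all objectives are minimised) is shown below, although the FPGA implementation of course realises this in hardware.

    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly better
        # in at least one (all objectives minimised).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def nondominated_front(points):
        # First Pareto front: the points that no other point dominates.
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # Illustrative two-objective points: (1, 4), (2, 2) and (4, 1) form the first front.
    print(nondominated_front([(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]))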
Abstract:
There are many applications in aeronautical/aerospace engineering where some values of the design parameters cannot be provided or determined accurately. These values can be related to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to the presence of uncertainty parameters (Mach, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept coupled with Multi-Objective Evolutionary Algorithms (MOEAs) is defined by applying two statistical sampling formulas, mean and variance/standard deviation, associated with the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is implemented for two practical Unmanned Aerial System (UAS) design problems; the first case considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation and the second considers robust multidisciplinary (aero-structures) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints have more reliable performance and sensitivity in both aerodynamics and structures when compared to the baseline design.
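The statistical sampling formulas mentioned above can be sketched as follows in Python: the uncertain flight conditions are sampled around their nominal values and the mean and standard deviation of a performance metric become the robust objectives. The Gaussian sampling and the dummy lift-to-drag surrogate are assumptions for illustration, not the paper's actual models.

    import random
    import statistics

    def robust_objectives(performance, nominal, spread, n_samples=30):
        # Sample the uncertain parameters (e.g. Mach, angle of attack) around their
        # nominal values and return (mean, standard deviation) of the metric; the
        # robust formulation seeks high mean performance and low variability.
        samples = [performance([random.gauss(m, s) for m, s in zip(nominal, spread)])
                   for _ in range(n_samples)]
        return statistics.mean(samples), statistics.stdev(samples)

    # Hypothetical usage with a dummy lift-to-drag surrogate over (Mach, alpha):
    ld = lambda x: 20.0 - (x[0] - 0.75) ** 2 - 0.1 * (x[1] - 2.0) ** 2
    print(robust_objectives(ld, nominal=[0.75, 2.0], spread=[0.01, 0.2]))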
Abstract:
The use of adaptive wing/aerofoil designs is considered a promising technique in aeronautics/aerospace, since such designs can reduce aircraft emissions and improve the aerodynamic performance of manned or unmanned aircraft. The paper investigates robust design and optimisation for one type of adaptive technique: an Active Flow Control (AFC) bump at transonic flow conditions on a Natural Laminar Flow (NLF) aerofoil designed to increase aerodynamic efficiency (especially a high lift-to-drag ratio). The concept of using a Shock Control Bump (SCB) is to control supersonic flow on the suction/pressure side of the NLF aerofoil RAE 5243, which leads to delaying shock occurrence or weakening its strength. Such an AFC technique reduces total drag at transonic speeds due to the reduction of wave drag. The location of Boundary Layer Transition (BLT) can influence the position of the supersonic shock occurrence. The BLT position is an uncertainty in aerodynamic design due to many factors, such as surface contamination or surface erosion. The paper studies SCB shape design optimisation using robust Evolutionary Algorithms (EAs) with uncertainty in BLT positions. The optimisation method is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. Two test cases are conducted; the first test assumes the BLT is at 45% of chord from the leading edge, and the second test considers robust design optimisation for the SCB under variability of BLT positions and lift coefficient. Numerical results show that the optimisation method, coupled with uncertainty design techniques, produces Pareto optimal SCB shapes which have low sensitivity and high aerodynamic performance while achieving significant total drag reduction.
Abstract:
Premature convergence to locally optimal solutions is one of the main difficulties when using evolutionary algorithms in real-world optimization problems. To prevent premature convergence and the degeneration phenomenon, this paper proposes a new optimization computation approach, the human-simulated immune evolutionary algorithm (HSIEA). Considering that the premature convergence problem is due to a lack of diversity in the population, the HSIEA employs the clonal selection principle of artificial immune system theory to preserve the diversity of solutions during the search process. Mathematical descriptions and procedures of the HSIEA are given, and four new evolutionary operators are formulated: clone, variation, recombination, and selection. Two benchmark optimization functions are investigated to demonstrate the effectiveness of the proposed HSIEA.
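A generic clonal-selection step of the kind the HSIEA builds on might look like the Python sketch below; the Gaussian variation and the omission of the recombination operator are simplifications, not the paper's actual operator definitions.

    import random

    def clonal_selection_step(population, fitness, clones_per_parent=5, sigma=0.3):
        # Clone each individual, apply variation to the clones, and keep the best
        # of each family (selection); this preserves diversity across the population
        # because every individual contributes one survivor.
        next_population = []
        for individual in population:
            clones = [[g + random.gauss(0, sigma) for g in individual]   # variation
                      for _ in range(clones_per_parent)] + [individual]
            next_population.append(min(clones, key=fitness))             # selection
        return next_population

    # Hypothetical usage on a 2-D sphere function (to be minimised):
    pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
    pop = clonal_selection_step(pop, fitness=lambda x: sum(g * g for g in x))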
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users. It is also predicted by analyst firms to have a positive future market. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres like the Cloud. One of the challenges is providing management of Cloud resources for SaaS which guarantees maintaining SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems. Therefore, evolutionary algorithms are adopted as the main technique in solving these problems. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic nature of the Cloud environment, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level instead of the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that can minimise the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find.
A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that the solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various types of evolutionary algorithms have been developed to address these problems, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
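As a rough illustration of how a genetic algorithm can encode the placement problem described above, the Python sketch below uses a chromosome that maps each component to a server and a penalty-based cost; the single capacity dimension and the omission of component dependencies and response-time constraints are simplifying assumptions, not the thesis's formulation.

    import random

    def placement_cost(assignment, demand, capacity, penalty=1000):
        # assignment[i] is the server hosting component i. Cost = number of servers
        # used, plus a heavy penalty whenever a server's capacity is exceeded.
        load = {}
        for comp, server in enumerate(assignment):
            load[server] = load.get(server, 0) + demand[comp]
        overload = sum(max(0, l - capacity) for l in load.values())
        return len(load) + penalty * overload

    def crossover(a, b):
        # One-point crossover on the component-to-server mapping.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(assignment, n_servers):
        # Move one randomly chosen component to a random server.
        child = list(assignment)
        child[random.randrange(len(child))] = random.randrange(n_servers)
        return child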
Abstract:
The work presented in this report aims to implement a cost-effective offline mission path planner for aerial inspection tasks of large linear infrastructures. Like most real-world optimisation problems, mission path planning involves a number of objectives which ideally should be minimised simultaneously. Understandably, the objectives of a practical optimisation problem conflict with each other, and minimising one of them necessarily implies that the others cannot be fully minimised. This leads to the need to find a set of optimal solutions for the problem; once such a set of available options is produced, the mission planning problem is reduced to a decision-making problem for the mission specialists, who will choose the solution which best fits the requirements of the mission. The goal of this work is therefore to develop a Multi-Objective optimisation tool able to provide the mission specialists with a set of optimal solutions for the inspection task, amongst which the final trajectory will be chosen, given the environment data, the mission requirements and the definition of the objectives to minimise. All the possible optimal solutions of a Multi-Objective optimisation problem are said to form the Pareto-optimal front of the problem. For any of the Pareto-optimal solutions, it is impossible to improve one objective without worsening at least another one. Amongst a set of Pareto-optimal solutions, no solution is absolutely better than another and the final choice must be a trade-off of the objectives of the problem. Multi-Objective Evolutionary Algorithms (MOEAs) are recognised as a convenient method for exploring the Pareto-optimal front of Multi-Objective optimization problems. Their efficiency is due to their inherently parallel architecture, which allows several optimal solutions to be found at a time.
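Once the MOEA has produced the Pareto-optimal front, the remaining decision-making step can be illustrated with a small Python sketch that normalises the objectives over the front and picks the solution closest to the ideal point under given preference weights; the weighting rule and the example objective values are assumptions, not part of the report.

    def pick_tradeoff(front, weights):
        # Normalise each objective over the front and choose the solution closest
        # to the ideal (all-minima) point, weighted by the specialist's preferences.
        n = len(front[0])
        mins = [min(p[i] for p in front) for i in range(n)]
        maxs = [max(p[i] for p in front) for i in range(n)]
        def distance(p):
            return sum(w * ((v - lo) / ((hi - lo) or 1.0)) ** 2
                       for v, lo, hi, w in zip(p, mins, maxs, weights))
        return min(front, key=distance)

    # Hypothetical front with objectives (path length, collision risk):
    print(pick_tradeoff([(10.0, 0.9), (14.0, 0.3), (20.0, 0.1)], weights=[0.5, 0.5]))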
Abstract:
This paper describes Mateda-2.0, a MATLAB package for estimation of distribution algorithms (EDAs). This package can be used to solve single and multi-objective discrete and continuous optimization problems using EDAs based on undirected and directed probabilistic graphical models. The implementation contains several methods commonly employed by EDAs. It is also conceived as an open package to allow users to incorporate different combinations of selection, learning, sampling, and local search procedures. Additionally, it includes methods to extract, process and visualize the structures learned by the probabilistic models. This way, it can unveil previously unknown information about the optimization problem domain. Mateda-2.0 also incorporates a module for creating and validating function models based on the probabilistic models learned by EDAs.
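Mateda-2.0 itself is a MATLAB package; as a language-neutral illustration of the basic EDA loop it implements in far richer form, here is a univariate (UMDA-style) sketch in Python with a toy OneMax objective. All parameters are illustrative assumptions.

    import random

    def umda_binary(fitness, n_bits=20, pop_size=100, elite=50, generations=60):
        # Univariate EDA: estimate per-bit probabilities from the selected (elite)
        # individuals, then sample the next population from that model; Mateda-2.0
        # supports much richer directed/undirected probabilistic graphical models.
        probs = [0.5] * n_bits
        for _ in range(generations):
            pop = [[1 if random.random() < p else 0 for p in probs]
                   for _ in range(pop_size)]
            pop.sort(key=fitness, reverse=True)          # maximise fitness
            selected = pop[:elite]
            probs = [sum(ind[i] for ind in selected) / elite for i in range(n_bits)]
        return max(pop, key=fitness)

    # Hypothetical usage: OneMax (count of ones) as the function to maximise.
    print(umda_binary(fitness=sum))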
Abstract:
This dissertation presents the implementation of a parallel genetic algorithm using the coarse-grained model, also known as the island model, for multiprocessor embedded systems. Multiprocessor embedded systems are becoming increasingly complex, driven by the demand for greater computational power required by the applications running on them, mainly multimedia, Internet and wireless communication applications. Some of these applications are beginning to use genetic algorithms, which can benefit from the advantages provided by the parallel processing available in multiprocessor embedded systems. In the island-model parallel genetic algorithm, each processor of the embedded system is responsible for evolving one population independently of the others. To accelerate the evolutionary process, the migration operator is executed at defined intervals to migrate the best individuals between islands. Different logical topologies, such as ring, neighbourhood and broadcast, are analysed in the individual migration phase. Experimental results are generated for the optimisation of three functions found in the literature.
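A minimal sketch of the coarse-grained (island) model with ring migration, written in Python and run sequentially purely for illustration; in the dissertation each island would run on its own processor of the embedded system, and the OneMax objective and all parameters below are assumptions.

    import random

    def island_ga(fitness, n_islands=4, island_size=20, genome_len=16,
                  generations=100, migration_interval=10, n_migrants=2):
        # Each island evolves its own population independently; every
        # migration_interval generations the best individuals of each island
        # replace the worst individuals of the next island in a ring topology.
        islands = [[[random.randint(0, 1) for _ in range(genome_len)]
                    for _ in range(island_size)] for _ in range(n_islands)]
        for gen in range(1, generations + 1):
            for isl in islands:
                isl.sort(key=fitness, reverse=True)
                parents = isl[:island_size // 2]
                children = []
                while len(children) < island_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, genome_len)
                    child = a[:cut] + b[cut:]                      # one-point crossover
                    child[random.randrange(genome_len)] ^= 1       # bit-flip mutation
                    children.append(child)
                isl[:] = parents + children
            if gen % migration_interval == 0:                      # ring migration
                bests = [sorted(isl, key=fitness, reverse=True)[:n_migrants]
                         for isl in islands]
                for i, migrants in enumerate(bests):
                    dest = islands[(i + 1) % n_islands]
                    dest.sort(key=fitness, reverse=True)
                    dest[-n_migrants:] = [list(ind) for ind in migrants]
        return max((ind for isl in islands for ind in isl), key=fitness)

    print(island_ga(fitness=sum))   # OneMax as a stand-in objective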
Abstract:
Evolutionary algorithms are a common tool in engineering and in the study of natural evolution. Here we take their use in a new direction by showing how they can be made to implement a universal computer. We consider populations of individuals with genes whose values are the variables of interest. By allowing them to interact with one another in a specified environment with limited resources, we demonstrate the ability to construct any arbitrary logic circuit. We explore models based on the limits of small and large populations, and show examples of such a system in action, implementing a simple logic circuit.