941 results for Multiobjective Evolutionary Algorithm
Abstract:
Server consolidation using virtualization has become an important technique for improving the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by physical machines and ignore the energy consumed by the communication network in a data center. This network energy consumption is far from trivial and should therefore be taken into account in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the physical machines and the communication network in a data center. Aiming to improve the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm and that it is scalable.
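The abstract does not give the energy model, but the combined objective it describes (physical-machine energy plus network energy) can be illustrated with a minimal sketch. The linear power model, traffic matrix and hop-cost function below are assumptions for illustration, not the paper's formulation.

```python
from itertools import combinations

def placement_energy(placement, vm_cpu, pm_idle_w, pm_peak_w, pm_capacity,
                     traffic, hop_cost):
    """Combined objective: PM power (idle + load-proportional) plus network cost.
    placement[v] = index of the PM hosting VM v; all model constants are illustrative."""
    # Physical-machine term: every PM hosting at least one VM draws idle power
    # plus a share of (peak - idle) proportional to its CPU load.
    load = {}
    for vm, pm in enumerate(placement):
        load[pm] = load.get(pm, 0.0) + vm_cpu[vm]
    pm_energy = sum(pm_idle_w + (pm_peak_w - pm_idle_w) * min(u / pm_capacity, 1.0)
                    for u in load.values())
    # Network term: traffic between each VM pair weighted by the hop distance
    # between their hosts (zero when co-located).
    net_energy = sum(traffic[a][b] * hop_cost(placement[a], placement[b])
                     for a, b in combinations(range(len(placement)), 2))
    return pm_energy + net_energy

# Example: three VMs on two PMs; co-located VMs incur no network cost.
hops = lambda p, q: 0 if p == q else 1
print(placement_energy([0, 0, 1], [0.3, 0.2, 0.5], 100.0, 250.0, 1.0,
                       [[0, 5, 1], [5, 0, 2], [1, 2, 0]], hops))
```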
Abstract:
The placement of the mappers and reducers on the machines directly affects the performance and cost of a MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it against several other heuristics on solution quality and computation time over a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics. We also verify its effectiveness by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement. The comparison shows that the computation using our mapper/reducer placement is much cheaper while still meeting the computation deadline.
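The paper's heuristic is not described in the abstract; as a stand-in illustration of the bin-packing view of the placement problem, here is a classic first-fit-decreasing assignment in Python (the capacity model is an assumption).

```python
def first_fit_decreasing(task_demands, machine_capacity):
    """Classic first-fit-decreasing bin packing (assumes each demand fits one machine)."""
    machines = []     # remaining capacity of each opened machine
    assignment = {}   # task -> machine index
    for task, demand in sorted(task_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(machines):
            if demand <= free:
                machines[i] -= demand
                assignment[task] = i
                break
        else:
            machines.append(machine_capacity - demand)   # open a new machine
            assignment[task] = len(machines) - 1
    return assignment, len(machines)

# Example: five map/reduce tasks packed onto unit-capacity machines.
print(first_fit_decreasing({"m1": 0.6, "m2": 0.5, "m3": 0.4, "r1": 0.3, "r2": 0.2}, 1.0))
```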
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original grouping genetic algorithm, ours uses an innovative coding scheme and eliminates the inversion operator, which is an essential operator in the original algorithm. The new grouping genetic algorithm is evaluated by experiments, and the results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
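The abstract's coding scheme is not detailed; the sketch below illustrates a generic grouping-GA representation and group-injection crossover for a bin-packing-style problem, which is only an approximation of the kind of operators involved, not the paper's own.

```python
import random

def group_crossover(parent_a, parent_b, rng=random):
    """Group-injection crossover for a grouping encoding: a chromosome is a list
    of sets of task ids, every task appearing in exactly one set (one set = one machine)."""
    injected = [set(g) for g in rng.sample(parent_b, k=max(1, len(parent_b) // 2))]
    moved = set().union(*injected)
    # Strip injected tasks out of parent_a's groups so no task is duplicated,
    # then drop groups that become empty and append the injected groups.
    child = [g - moved for g in (set(g) for g in parent_a)]
    child = [g for g in child if g]
    child.extend(injected)
    return child

# Example: tasks 1-5 grouped onto machines in two different ways.
print(group_crossover([{1, 2}, {3, 4}, {5}], [{1, 3}, {2, 5}, {4}]))
```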
Abstract:
A Software-as-a-Service (SaaS) can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled (replicated or deleted) to accommodate the user's load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances; on the other hand, when the load is low, some instances may need to be deleted to avoid resource underutilisation. Thus, it is important to determine which components to scale such that the performance of the SaaS is still maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed that exploits problem-specific knowledge to explore the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
Abstract:
Organisations are constantly seeking new ways to improve operational efficiency. This research investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A genetic algorithm-based approach is proposed to explore and assess alternative process execution scenarios, where the objective function is represented by a comprehensive cost structure that captures different process dimensions. Experiments conducted with different variants of the genetic algorithm evaluate the approach's feasibility. The findings demonstrate that a genetic algorithm-based approach is able to use cost reduction to identify improved execution scenarios in terms of reduced case durations and increased resource utilisation. The ultimate aim is to use the cost-related insights gained from such improved scenarios to put forward recommendations for reducing process-related cost within organisations.
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an integral management activity in data centers for power saving, load balancing and system maintenance. While state-of-the-art live migration techniques focus on improving the migration performance of a single, independent VM, little has been done on the live migration of multiple interacting VMs. Live migration is strongly influenced by the available network bandwidth, and arbitrarily migrating a VM that has data inter-dependencies with other VMs may increase bandwidth consumption and adversely affect the performance of subsequent migrations. In this paper, we propose a Random Key Genetic Algorithm (RKGA) that efficiently schedules the migration of a given set of VMs, accounting for both inter-VM dependencies and the data center communication network. The experimental results show that the RKGA schedules the migration of multiple VMs with significantly shorter total migration time and total downtime than a heuristic algorithm.
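A random-key encoding is standard practice: each VM receives a real-valued key, and sorting the keys yields a migration order. The sketch below shows that decoding plus the biased uniform crossover commonly used with random-key GAs; it is illustrative and not the paper's exact RKGA.

```python
import random

def decode_migration_order(keys):
    """Random-key decoding: VM i carries key keys[i]; sorting keys gives the order."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def random_key_offspring(parent_a, parent_b, bias=0.7, rng=random):
    """Parameterised uniform crossover typical of random-key GAs:
    each key is inherited from parent_a with probability `bias`."""
    return [a if rng.random() < bias else b for a, b in zip(parent_a, parent_b)]

# Example: five VMs; the decoded index list is one candidate migration schedule.
keys = [random.random() for _ in range(5)]
print(decode_migration_order(keys))
```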
Abstract:
Network Real-Time Kinematic (NRTK) is a technology that can provide centimeter-level positioning services in real time; it is enabled by a network of Continuously Operating Reference Stations (CORS). The location-oriented CORS placement problem is an important problem in the design of an NRTK, as it directly affects not only the installation and operational cost of the NRTK but also the quality of the positioning services it provides. This paper presents a Memetic Algorithm (MA) for the location-oriented CORS placement problem, which hybridizes the powerful explorative search capacity of a genetic algorithm with the efficient and effective exploitative search capacity of local optimization. Experimental results show that the MA performs better than existing approaches. We also conduct an empirical study of the scalability of the MA, the effectiveness of the hybridization technique, and the selection of the crossover operator in the MA.
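The hybridization the abstract describes (GA exploration plus local-search exploitation) follows the usual memetic template. A generic sketch, with the fitness, crossover, mutation and local-search operators supplied by the caller, might look like this; nothing in it is specific to CORS placement.

```python
import random

def memetic_search(init_pop, fitness, crossover, mutate, local_search,
                   generations=100, rng=random):
    """Generic memetic loop: GA operators explore, local search exploits.
    Assumes fitness is maximized and the population has at least 4 members;
    all operators are caller-supplied."""
    pop = [local_search(ind) for ind in init_pop]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: len(pop) // 2]           # keep the better half
        children = []
        while len(children) < len(pop) - len(elite):
            a, b = rng.sample(elite, 2)        # explore by recombining elites
            children.append(local_search(mutate(crossover(a, b))))
        pop = elite + children                 # each child is locally improved first
    return max(pop, key=fitness)
```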
Abstract:
Evolutionary algorithms are playing an increasingly important role as search methods in cognitive science domains. In this study, methodological issues in the use of evolutionary algorithms were investigated via simulations in which procedures were systematically varied to modify the selection pressures on populations of evolving agents. Traditional roulette wheel, tournament, and variations of these selection algorithms were compared on the “needle-in-a-haystack” problem developed by Hinton and Nowlan in their 1987 study of the Baldwin effect. The task is an important one for cognitive science, as it demonstrates the power of learning as a local search technique in smoothing a fitness landscape that lacks gradient information. One aspect that has continued to foster interest in the problem is the observation of residual learning ability in simulated populations even after long periods of time. Effective evolutionary algorithms balance their search effort between broad exploration of the search space and in-depth exploitation of promising solutions already found. Issues discussed include the differential effects of rank and proportional selection, the tradeoff between migration of populations towards good solutions and maintenance of diversity, and the development of measures that illustrate how each selection algorithm affects the search process over generations. We show that both roulette wheel and tournament algorithms can be modified to appropriately balance search between exploration and exploitation, and effectively eliminate residual learning in this problem.
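For reference, the two baseline selection schemes the study compares can each be written in a few lines; these are textbook versions (assuming non-negative fitness values for the roulette wheel), not the modified variants developed in the study.

```python
import random

def roulette_wheel(population, fitnesses, rng=random):
    """Fitness-proportional selection (assumes non-negative fitnesses)."""
    total = sum(fitnesses)
    r = rng.uniform(0, total)
    acc = 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if acc >= r:
            return individual
    return population[-1]  # guard against floating-point round-off

def tournament(population, fitnesses, k=2, rng=random):
    """Sample k individuals uniformly and keep the fittest (rank-based pressure)."""
    contenders = rng.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitnesses[i])]
```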
Abstract:
The growth of data-center-dependent services has made energy optimization of data centers one of the most pressing challenges of today's Information Age. Green and energy-efficient measures are essential for reducing carbon footprint and exorbitant energy costs. However, inefficient application management in data centers results in high energy consumption and low resource utilization efficiency. Unfortunately, in most cases, deploying an energy-efficient application management solution inevitably degrades the resource utilization efficiency of the data center. To address this problem, a Penalty-based Genetic Algorithm (GA) is presented in this paper to solve a defined profile-based application assignment problem while maintaining a trade-off between power consumption and resource utilization performance. Case studies show that the penalty-based GA is highly scalable and provides 16% to 32% better solutions than a greedy algorithm.
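The profile-based model is not given in the abstract; a minimal sketch of what a penalty-based objective balancing power against a utilization target could look like (all thresholds and weights are hypothetical):

```python
def penalised_fitness(power_w, utilisation, max_power_w,
                      min_utilisation=0.6, penalty_weight=10.0):
    """Illustrative penalty-based objective (lower is better): normalised power
    plus a penalty whenever average resource utilisation falls below a target.
    Thresholds and weights are hypothetical, not the paper's profile-based model."""
    objective = power_w / max_power_w
    shortfall = max(0.0, min_utilisation - utilisation)
    return objective + penalty_weight * shortfall

# A low-power but under-utilised assignment can score worse than a slightly
# hungrier, well-utilised one.
print(penalised_fitness(3000.0, 0.35, 10000.0))   # penalised
print(penalised_fitness(4000.0, 0.75, 10000.0))   # no penalty
```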
Abstract:
This is a continuation of earlier studies on the evolution of infinite populations of haploid genotypes within a genetic algorithm framework. We had previously explored the evolutionary consequences of indeterminate ("plastic") loci, where a plastic locus has a finite probability in each generation of functioning (being switched "on") or not functioning (being switched "off"). The relative probabilities of the two outcomes were assigned on a stochastic basis. The present paper examines what happens when the transition probabilities are biased by the presence of regulatory genes. We find that under certain conditions regulatory genes can improve the adaptation of the population and speed up the rate of evolution (occasionally at the cost of lowering the degree of adaptation). The existence of regulatory loci also potentiates selection in favour of plasticity. There is a synergistic effect of regulatory genes on plastic alleles: the frequency of such alleles increases when regulatory loci are present. Thus, phenotypic selection alone can be a potentiating factor in favour of better adaptation.
Abstract:
In recent years, the identification of sequence patterns has received immense attention as a means to better understand their significance with respect to genomic organization and evolutionary processes. To this end, an algorithm has been derived to identify all similar sequence repeats present in a protein sequence. The proposed algorithm is useful for correlating the three-dimensional structures of the various similar sequence repeats available in the Protein Data Bank with the same sequence repeats present in other databases such as SWISS-PROT, PIR and genome databases.
Abstract:
Many optimal control problems are characterized by multiple performance measures that are often noncommensurable and compete with each other. The presence of multiple objectives in a problem usually gives rise to a set of optimal solutions, largely known as Pareto-optimal solutions. Evolutionary algorithms have been recognized as well suited for multi-objective optimization because of their capability to evolve a set of nondominated solutions distributed along the Pareto front. This has led to the development of many evolutionary multi-objective optimization algorithms, among which the Nondominated Sorting Genetic Algorithm (NSGA, and its enhanced version NSGA-II) has been found effective in solving a wide variety of problems. Recently, we reported a genetic algorithm-based technique for solving dynamic single-objective optimization problems, with single as well as multiple control variables, that appear in fed-batch bioreactor applications. The purpose of this study is to extend that methodology to the solution of multi-objective optimal control problems under the framework of NSGA-II. The applicability of the technique is illustrated by solving two optimal control problems, taken from the literature, which have usually been solved by several methods as single-objective dynamic optimization problems.
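NSGA-II is built on Pareto dominance and non-dominated sorting; the following minimal sketch shows the dominance test and a simple non-dominated filter for objective vectors (minimization assumed), independent of any specific bioreactor model.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points no other point dominates (the current Pareto front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example with two competing objectives; (3.0, 4.0) is dominated by (2.0, 3.0).
print(nondominated([(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]))
```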
Abstract:
Distant repeats between a pair of protein sequences can be exploited to study various aspects of proteins, such as structure-function relationships, disorders due to protein malfunction, and evolutionary analysis. An in-depth analysis of distant repeats would help establish a stable evolutionary relationship of the repeats with respect to their three-dimensional structure. To this effect, an algorithm has been devised to identify the distant repeats in a pair of protein sequences, essentially using the scores of PAM (Percent Accepted Mutation) matrices. The proposed algorithm will be of much use to researchers involved in the comparative study of various organisms based on amino-acid repeats in protein sequences.
Abstract:
In this paper, a novel genetic algorithm is developed that generates artificial chromosomes with probability control to solve machine scheduling problems. Generating artificial chromosomes for a Genetic Algorithm (ACGA) is closely related to Evolutionary Algorithms Based on Probabilistic Models (EAPM). The artificial chromosomes are generated by a probability model that extracts gene information from the current population. ACGA is considered a hybrid algorithm because it integrates both conventional genetic operators and a probability model. The ACGA proposed in this paper further employs the "evaporation concept" used in Ant Colony Optimization (ACO) to solve the permutation flowshop problem. The evaporation concept reduces the effect of past experience and encourages the exploration of new alternative solutions. We propose three different methods for computing the probability of evaporation, which is applied as soon as a job is assigned to a position in the permutation flowshop problem. Experimental results show that our ACGA with the evaporation concept outperforms several algorithms in the literature.
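An EDA-style probability model with evaporation, as the abstract outlines, could be sketched as follows: a position-by-job matrix is decayed on each update and reinforced from the current population, and artificial chromosomes are sampled from it. The update rule and constants are illustrative, not the paper's exact formulation.

```python
import random

def update_probability_model(model, population, evaporation=0.1):
    """model[p][j] ~ how often job j occupies position p in good permutations.
    Evaporation discounts past counts so the model can drift toward newly
    found orderings."""
    n = len(model)
    for p in range(n):
        for j in range(n):
            model[p][j] *= (1.0 - evaporation)        # forget part of the past
    for perm in population:
        for p, j in enumerate(perm):
            model[p][j] += 1.0 / len(population)      # reinforce observed placements
    return model

def sample_artificial_chromosome(model, rng=random):
    """Draw a permutation position by position, masking jobs already used."""
    n = len(model)
    remaining = list(range(n))
    perm = []
    for p in range(n):
        weights = [model[p][j] + 1e-9 for j in remaining]  # floor avoids a zero-sum draw
        j = rng.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(j)
        perm.append(j)
    return perm

# Example: build a model from two good schedules, then sample an artificial chromosome.
model = [[0.0] * 4 for _ in range(4)]
update_probability_model(model, [[0, 1, 2, 3], [0, 2, 1, 3]])
print(sample_artificial_chromosome(model))
```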
Abstract:
Genetic Algorithms (GAs) are recognized as an alternative class of computational models that mimic natural evolution to solve problems in a wide range of domains, including machine learning, music generation and genetic synthesis. In the present study a Genetic Algorithm has been employed for the damage assessment of composite structural elements. A state of damage is modeled as a reduction in stiffness, and the task is to determine the magnitude and location of the damage. In a composite plate discretized into a set of finite elements, if the j-th element is damaged, the GA-based technique will predict the reductions in E_x and E_y and the location j. The method exploits the fact that the natural frequency decreases with decreasing stiffness. The natural frequencies of any two modes of the damaged plate, for the assumed damage parameters, are obtained through eigenvalue sensitivity analysis; the eigenvalue sensitivities are the derivatives of the eigenvalues with respect to certain design parameters. If ω_i^u is the natural frequency of the i-th mode of the undamaged plate and ω_i^d is that of the damaged plate, δω_i is the difference between the two, δω_k is the corresponding difference for the k-th mode, and R is defined as the ratio δω_i/δω_k. For a random selection of E_x, E_y and j, a ratio R_i is obtained, and a combination of E_x, E_y and j that makes R_i − R = 0 is found by the Genetic Algorithm.
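The search described above amounts to minimizing the mismatch between the measured frequency-shift ratio R and the ratio R_i predicted for a candidate (E_x, E_y, j). A sketch of that objective, with the frequency-prediction model (e.g. eigenvalue-sensitivity based) left as a caller-supplied assumption:

```python
def frequency_shift_ratio(omega_undamaged, omega_damaged, mode_i, mode_k):
    """R = delta_omega_i / delta_omega_k, the ratio of frequency drops in two modes."""
    d_i = omega_undamaged[mode_i] - omega_damaged[mode_i]
    d_k = omega_undamaged[mode_k] - omega_damaged[mode_k]
    return d_i / d_k

def damage_candidate_error(candidate, measured_ratio, predict_frequencies,
                           omega_undamaged, mode_i, mode_k):
    """GA objective: |R_i - R|, to be driven toward zero.
    candidate = (Ex_reduction, Ey_reduction, element_index); predict_frequencies
    is a caller-supplied frequency model (hypothetical here)."""
    omega_candidate = predict_frequencies(candidate)
    r_i = frequency_shift_ratio(omega_undamaged, omega_candidate, mode_i, mode_k)
    return abs(r_i - measured_ratio)
```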