81 results for Evolutionary computation

in CentAUR: Central Archive University of Reading - UK



Relevance: 60.00%

Abstract:

This paper deals with the design of optimal multiple gravity assist trajectories with deep space manoeuvres. A pruning method which considers the sequential nature of the problem is presented. The method locates feasible vectors using local optimization and applies a clustering algorithm to find reduced bounding boxes which can be used in a subsequent optimization step. Since multiple local minima remain within the pruned search space, the use of a global optimization method, such as Differential Evolution, is suggested for finding solutions which are likely to be close to the global optimum. Two case studies are presented.
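
For readers unfamiliar with Differential Evolution, the sketch below implements the common rand/1/bin variant over a box-bounded search space; all names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=40, F=0.8, CR=0.9,
                           generations=200, rng=None):
    """Minimal rand/1/bin Differential Evolution; bounds is an array
    of (low, high) rows, one per decision variable."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([objective(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: perturb one random individual by a scaled
            # difference of two others (rand/1).
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, guaranteeing at least one mutant gene.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]
```

In the trajectory setting, `objective` would map a vector of launch and manoeuvre parameters to total delta-v and would be run separately inside each pruned bounding box.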

Relevance: 60.00%

Abstract:

A fast Knowledge-based Evolution Strategy, KES, for the multi-objective minimum spanning tree problem is presented. The proposed algorithm is validated, for the bi-objective case, with an exhaustive search for small problems (4-10 nodes), and compared with a deterministic algorithm, EPDA, and with NSGA-II for larger problems (up to 100 nodes) using benchmark hard instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and calculates good approximation Pareto sets for the larger instances tested. The fronts calculated by KES are shown to be superior to NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and fast due to its low computational complexity.
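
Comparing Pareto fronts, as the abstract does, rests on a dominance test between objective vectors. A minimal sketch, assuming minimisation of both objectives (e.g., two edge-weight sums of a spanning tree):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: (cost, weight) pairs for candidate spanning trees.
print(pareto_front([(3, 7), (4, 4), (5, 5), (6, 2)]))
# -> [(3, 7), (4, 4), (6, 2)]; (5, 5) is dominated by (4, 4)
```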

Relevance: 60.00%

Abstract:

The synapsing variable-length crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche-formation techniques to be used with variable-length genomes. Unlike other variable-length crossover techniques, which treat genomes as rigid, inflexible arrays and select some or all of the crossover points at random, the SVLC algorithm considers genomes to be flexible and chooses non-random crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homologous genetic subsequences together. This is done in such a way that common parental sequences are automatically preserved in the offspring, with only the genetic differences being exchanged or removed, independent of the length of such differences. In a variable-length test problem, the SVLC algorithm compares favorably with current variable-length crossover techniques. The variable-length approach is further advocated by demonstrating how a variable-length genetic algorithm (GA) can obtain a high-fitness solution in fewer iterations than a traditional fixed-length GA in a two-dimensional vector approximation task.
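
As a rough illustration of the idea (and emphatically not the published SVLC algorithm), the toy crossover below uses `difflib` to align common subsequences, keeps them as the "synapses", and inherits the differing segments between them from either parent:

```python
from difflib import SequenceMatcher
import random

def svlc_like_crossover(parent_a, parent_b, rng=random):
    """Toy variable-length crossover: keep aligned common subsequences
    ('synapses') and, between them, inherit the differing segment from
    either parent at random. Illustrative only."""
    child, a_pos, b_pos = [], 0, 0
    matcher = SequenceMatcher(a=parent_a, b=parent_b, autojunk=False)
    for a_start, b_start, size in matcher.get_matching_blocks():
        # Pick one parent's differing segment preceding this synapse.
        if rng.random() < 0.5:
            child.extend(parent_a[a_pos:a_start])
        else:
            child.extend(parent_b[b_pos:b_start])
        child.extend(parent_a[a_start:a_start + size])  # common region kept
        a_pos, b_pos = a_start + size, b_start + size
    return child

print(svlc_like_crossover(list("ABXYZCD"), list("ABQCD")))
```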

Relevance: 60.00%

Abstract:

We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO)-aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted for two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO-aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization-assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it has no learning hyperparameters that must be tuned using costly cross-validation. The effectiveness of the proposed PSO-aided OFR construction procedure is illustrated with examples from regression, classification, and density estimation applications.
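
A generic global-best PSO sketch is given below; in the paper's setting, the decision vector would encode one candidate RBF node (its centre and diagonal covariance) and `objective` would be the chosen LOO statistic. All names and parameter values here are assumptions for illustration.

```python
import numpy as np

def pso_minimise(objective, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Global-best particle swarm optimisation over box bounds."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = lo + rng.random((n_particles, dim)) * (hi - lo)   # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g_i = np.argmin(pbest_f)
    g, g_f = pbest[g_i].copy(), pbest_f[g_i]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Blend inertia, attraction to personal best, and to global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g_i = np.argmin(pbest_f)
            g, g_f = pbest[g_i].copy(), pbest_f[g_i]
    return g, g_f
```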

Relevance: 60.00%

Abstract:

This paper is concerned with the use of a genetic algorithm to select financial ratios for corporate distress classification models. For this purpose, the fitness value associated with a set of ratios is made to reflect the requirements of maximizing the amount of information available for the model and minimizing the collinearity between the model inputs. A case study involving 60 failed and continuing British firms in the period 1997-2000 is used for illustration. The classification model based on ratios selected by the genetic algorithm compares favorably with a model employing ratios usually found in the financial distress literature.
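
One plausible reading of such a fitness function (a sketch, not the paper's actual formula) rewards retained variance as an information proxy and penalises pairwise correlation among the selected ratios:

```python
import numpy as np

def ratio_subset_fitness(X, subset, alpha=1.0):
    """Score a candidate set of financial ratios (columns of X).
    Sketch only: variance retained minus a collinearity penalty."""
    cols = X[:, list(subset)]
    info = cols.var(axis=0).sum()          # information proxy
    if cols.shape[1] > 1:
        corr = np.corrcoef(cols, rowvar=False)
        off_diag = corr[~np.eye(cols.shape[1], dtype=bool)]
        collinearity = np.abs(off_diag).mean()
    else:
        collinearity = 0.0
    return info - alpha * collinearity
```

A GA would then evolve bit-strings marking which ratios are selected, using this score (or the paper's own criterion) as fitness.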

Relevance: 60.00%

Abstract:

We present some additions to a fuzzy variable-radius niche technique called Dynamic Niche Clustering (DNC) (Gan and Warwick, 1999; 2000; 2001) that enable the identification and creation of niches of arbitrary shape through a mechanism called Niche Linkage. We show that by using this mechanism it is possible to attain better feature extraction from the underlying population.
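
The published Niche Linkage mechanism is not reproduced here, but the sketch below conveys the flavour: niches whose regions overlap are linked, and each linked chain can then cover an arbitrarily shaped region of the search space.

```python
def link_niches(centres, radii, dist):
    """Group overlapping niches into linked chains (connected
    components) with union-find. Illustrative only."""
    n = len(centres)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist(centres[i], centres[j]) <= radii[i] + radii[j]:
                parent[find(i)] = find(j)  # union overlapping niches

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```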

Relevance: 60.00%

Abstract:

This paper describes the recent developments and improvements made to the variable-radius niching technique called Dynamic Niche Clustering (DNC). DNC is a fitness-sharing-based technique that employs a separate population of overlapping fuzzy niches with independent radii, which operate in the decoded parameter space and are maintained alongside the normal GA population. We describe a speedup process that can be applied to the initial generation, greatly reducing the complexity of the initial stages. A split operator is also introduced that is designed to counteract the excessive growth of niches, and it is shown that this improves the overall robustness of the technique. Finally, the effect of local elitism is documented and compared to the performance of the basic DNC technique on a selection of 2D test functions. The paper concludes with a view to future work on the technique.
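
DNC builds on standard fitness sharing, in which each individual's raw fitness is divided by its niche count. The sketch below shows that underlying idea with a single fixed sharing radius; DNC instead maintains independent per-niche radii.

```python
import numpy as np

def shared_fitness(fitness, coords, sigma_share, alpha=1.0):
    """Classic fitness sharing with a triangular sharing function."""
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)   # includes self (d = 0 gives sh = 1)
    return np.asarray(fitness) / niche_count
```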

Relevance: 30.00%

Abstract:

Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information, but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple, standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios, and compute bias and precision measures for a given scenario and known parameter values (the current version applies to unlinked microsatellite data). This article describes the key methods used in the program and summarizes its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC.
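
The core of any ABC method is the rejection step; a minimal sketch follows, with `prior_sampler`, `simulate` and `summarise` as hypothetical user-supplied callbacks (DIY ABC adds considerable machinery on top of this idea).

```python
import numpy as np

def abc_rejection(observed_stats, prior_sampler, simulate, summarise,
                  n_draws=10_000, tolerance=0.05, rng=None):
    """Keep parameter draws whose simulated summary statistics fall
    within `tolerance` of the observed ones (Euclidean distance)."""
    rng = np.random.default_rng(rng)
    obs = np.asarray(observed_stats, dtype=float)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)                   # draw from prior
        stats = np.asarray(summarise(simulate(theta, rng)), dtype=float)
        if np.linalg.norm(stats - obs) <= tolerance:
            accepted.append(theta)
    return accepted                                  # ~ posterior sample
```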

Relevance: 30.00%

Abstract:

We describe and evaluate a new estimator of the effective population size (Ne), a critical parameter in evolutionary and conservation biology. This new "SummStat" Ne estimator is based upon the use of summary statistics in an approximate Bayesian computation framework to infer Ne. Simulations of a Wright-Fisher population with known Ne show that the SummStat estimator is useful across a realistic range of individuals and loci sampled, generations between samples, and Ne values. We also address the paucity of information about the relative performance of Ne estimators by comparing the SummStat estimator to two recently developed likelihood-based estimators and a traditional moment-based estimator. The SummStat estimator is the least biased of the four estimators compared: in 32 of 36 parameter combinations investigated, using initial allele frequencies drawn from a Dirichlet distribution, it had the lowest bias. The relative mean square error (RMSE) of the SummStat estimator was generally intermediate to the others. All of the estimators had RMSE > 1 when small samples (n = 20, five loci) were collected a generation apart. In contrast, when samples were separated by three or more generations and Ne ≤ 50, the SummStat and likelihood-based estimators all had greatly reduced RMSE. Under the conditions simulated, SummStat confidence intervals were more conservative than those of the likelihood-based estimators and more likely to include the true Ne. The greatest strength of the SummStat estimator is its flexible structure: it can incorporate any potentially informative summary statistic from population genetic data.
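
The Wright-Fisher simulations underlying such validation can be sketched in a few lines; the temporal signal that Ne estimators exploit is simply how fast allele frequencies drift between sampled generations (smaller Ne, faster drift).

```python
import numpy as np

def wright_fisher_drift(p0, n_e, generations, rng=None):
    """Drift allele frequencies p0 (one per locus) through a diploid
    Wright-Fisher population of effective size n_e."""
    rng = np.random.default_rng(rng)
    p = np.asarray(p0, dtype=float)
    for _ in range(generations):
        p = rng.binomial(2 * n_e, p) / (2 * n_e)  # 2*n_e gene copies
    return p

# Smaller n_e -> larger frequency change between samples.
print(wright_fisher_drift([0.5] * 5, n_e=50, generations=3, rng=1))
```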

Relevance: 30.00%

Abstract:

Mathematics in Defence 2011. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers encoded by floating-point bits is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to, and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with on the order of one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
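
The unordered-comparison problem the authors target is easy to demonstrate: under IEEE 754 every relational test against NaN is false, so trichotomy fails and NaN must be special-cased.

```python
import math

nan = float("nan")
# IEEE 754: comparisons with NaN are "unordered" -- all of these are False,
# so none of less-than, equal-to, greater-than holds.
print(nan < 0.0, nan == nan, nan > 0.0)   # False False False
print(math.isnan(nan))                    # True: NaN needs a special test
# Transreal arithmetic replaces the NaN states with ordered numbers,
# restoring trichotomy and freeing those bit patterns to encode reals.
```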

Relevance: 20.00%

Abstract:

Many pathogens transmit to new hosts by both infection (horizontal transmission) and transfer to the infected host's offspring (vertical transmission). These two transmission modes require specific adaptations of the pathogen that can be mutually exclusive, resulting in a trade-off between horizontal and vertical transmission. We show that in mathematical models such trade-offs can lead to the simultaneous existence of two evolutionarily stable states (evolutionary bi-stability) of allocation of resources to the two modes of transmission. We also show that jumping between evolutionarily stable states can be induced by gradual environmental changes. Using quantitative PCR-based estimates of abundance in seed and vegetative parts, we show that the wheat pathogen Phaeosphaeria nodorum has jumped between two distinct states of transmission mode twice in the past 160 years, which, based on published evidence, we interpret as adaptation to environmental change. The finding of evolutionary bi-stability has implications for human, animal and other plant diseases. An ill-judged change in a disease control programme could cause the pathogen to evolve a new, and possibly more damaging, combination of transmission modes. Similarly, environmental changes can shift the balance between transmission modes, with adverse effects on human, animal and plant health.
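
As a toy illustration of how a trade-off can produce two stable states (an assumption-laden sketch, not the authors' model): give the pathogen a fraction x of resources for horizontal transmission and 1 - x for vertical, with a convex trade-off, and both extremes become local fitness optima separated by an interior minimum.

```python
import numpy as np

def transmission_fitness(x, a=2.0):
    """Toy convex trade-off (a > 1): fitness from allocating x to
    horizontal and 1 - x to vertical transmission. Illustrative only."""
    return x ** a + (1.0 - x) ** a

x = np.linspace(0.0, 1.0, 101)
f = transmission_fitness(x)
print(x[np.argmin(f)])  # ~0.5: the unstable point between the two optima
```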

Relevance: 20.00%

Abstract:

1. The management of threatened species is an important practical way in which conservationists can intervene in the extinction process and reduce the loss of biodiversity. Understanding the causes of population declines (past, present and future) is pivotal to designing effective practical management. This is the declining-population paradigm identified by Caughley.

2. There are three broad classes of ecological tool used by conservationists to guide management decisions for threatened species: statistical models of habitat use, demographic models and behaviour-based models. Each of these is described here, illustrated with a case study and evaluated critically in terms of its practical application.

3. These tools are fundamentally different. Statistical models of habitat use and demographic models both use descriptions of patterns in abundance and demography, in relation to a range of factors, to inform management decisions. In contrast, behaviour-based models describe the evolutionary processes underlying these patterns, and derive such patterns from the strategies employed by individuals when competing for resources under a specific set of environmental conditions.

4. Statistical models of habitat use and demographic models have been used successfully to make management recommendations for declining populations. To do this, assumptions are made about population growth or vital rates that will apply when environmental conditions are restored, based on either past data collected under favourable environmental conditions or estimates of these parameters when the agent of decline is removed. As a result, they can only be used to make reliable quantitative predictions about future environments when a comparable environment has been experienced by the population of interest in the past.

5. Many future changes in the environment driven by management will not have been experienced by a population in the past. Under these circumstances, vital rates and their relationship with population density will change in the future in a way that is not predictable from past patterns. Reliable quantitative predictions about population-level responses then need to be based on an explicit consideration of the evolutionary processes operating at the individual level.

6. Synthesis and applications. It is argued that evolutionary theory underpins Caughley's declining-population paradigm, and that it needs to become much more widely used within mainstream conservation biology. This will help conservationists examine critically the reliability of the tools they have traditionally used to aid management decision-making. It will also give them access to alternative tools, particularly when predictions are required for changes in the environment that have not been experienced by a population in the past.