945 results for Multiobjective evolutionary algorithms


Relevance: 30.00%

Abstract:

* Supported by projects CCG08-UAM TIC-4425-2009 and TEC2007-68065-C03-02

Relevance: 30.00%

Abstract:

In this paper a genetic algorithm (GA) is applied to the Maximum Betweenness Problem (MBP). The maximum of the objective function is obtained by finding a permutation that satisfies a maximal number of betweenness constraints. Each candidate permutation is genetically encoded with an integer representation, and standard operators are used in the GA. The instances in the experimental results are randomly generated. For smaller dimensions, optimal solutions of the MBP are obtained by total enumeration; on those instances, the GA reached all optimal solutions except one. The GA also obtained results for larger instances of up to 50 elements and 1000 triples. The running time for execution and for finding optimal results is quite short.
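The abstract does not give the encoding details; as a rough sketch under assumed conventions, the fitness evaluation of such a GA could count how many betweenness triples (a, b, c) a candidate permutation satisfies:

```python
import random

def satisfied(perm, triples):
    """Count betweenness constraints (a, b, c) satisfied by a permutation.

    A triple (a, b, c) is satisfied when b lies between a and c in perm.
    This objective is an assumption based on the abstract; the paper's
    exact formulation may differ.
    """
    pos = {e: i for i, e in enumerate(perm)}
    return sum(1 for a, b, c in triples
               if min(pos[a], pos[c]) < pos[b] < max(pos[a], pos[c]))

# Tiny illustration: 4 elements, 3 betweenness triples, random search
# standing in for the GA's population loop.
triples = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]
best = max((random.sample(range(4), 4) for _ in range(200)),
           key=lambda p: satisfied(p, triples))
print(best, satisfied(best, triples))
```

In the actual GA, this count would serve as the fitness of each integer-coded chromosome, with crossover and mutation operating on the permutation encoding.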

Relevance: 30.00%

Abstract:

Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores, and inappropriate scheduling may cause hot spots that decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate various scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler that uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find some drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. EAs also improve performance when used on heterogeneous architectures, and an efficient Pareto front can be obtained with EAs when multiple objectives are pursued.
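The Pareto front mentioned above is built from mutual non-domination among candidate schedules. A minimal sketch, assuming two minimized objectives with hypothetical (peak temperature, makespan) values for candidate schedules:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (peak_temperature_C, makespan_cycles) pairs, one per schedule.
candidates = [(62.0, 110), (58.0, 130), (62.0, 125), (70.0, 105), (58.0, 140)]
print(pareto_front(candidates))
```

A multi-objective EA keeps such a non-dominated set instead of collapsing the objectives into one weighted score, which is the limitation the abstract attributes to PDTM/ATDTM-style approaches.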

Relevance: 30.00%

Abstract:

The profitability of momentum portfolios in the equity markets is derived from the continuation of stock returns over medium time horizons. The empirical evidence of momentum, however, differs significantly across markets around the world. The purpose of this dissertation is to: (1) help global investors determine the optimal selection and holding periods for momentum portfolios, (2) evaluate the profitability of the optimized momentum portfolios in different time periods and market states, (3) assess the investment strategy profits after considering transaction costs, and (4) interpret momentum returns within the framework of prior studies on investors' behavior. Improving on the traditional practice of choosing arbitrary selection and holding periods, a genetic algorithm (GA) is employed. The GA performs a thorough and structured search to capture the return continuation and reversal patterns of momentum portfolios. Three portfolio formation methods are used (price momentum, earnings momentum, and combined earnings and price momentum), together with a non-linear optimization procedure (the GA). The focus is on common equity of the U.S. and a select number of countries, including Australia, France, Germany, Japan, the Netherlands, Sweden, Switzerland and the United Kingdom. The findings suggest that the evolutionary algorithm increases the annualized profits of the U.S. momentum portfolios. However, the difference in mean returns is statistically significant only in certain cases. In addition, after considering transaction costs, neither the price nor the combined earnings and price momentum portfolios appear to generate abnormal returns. Positive risk-adjusted returns net of trading costs are documented solely during "up" markets for a portfolio long in prior winners only. The results on the international momentum effects indicate that the GA improves the momentum returns by 2 to 5% on an annual basis.
In addition, the relation between momentum returns and exchange rate appreciation/depreciation is examined. Currency appreciation does not appear to significantly influence momentum profits. Further, the influence of the market state on momentum returns is not uniform across the countries considered. The implications of the above findings are discussed with a focus on the practical aspects of momentum investing, both in the U.S. and globally.
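As an illustration of the kind of search described here, the sketch below evolves a (selection period, holding period) chromosome with a toy elitist GA over synthetic returns. The asset universe, return series, and fitness function are invented placeholders, not the study's data or backtest:

```python
import random
random.seed(42)

# Hypothetical monthly returns for a toy universe of 6 assets; a real
# momentum study would use actual equity return series.
returns = [[random.gauss(0.01, 0.05) for _ in range(6)] for _ in range(120)]

def momentum_profit(sel, hold):
    """Average long-short profit for a (selection, holding) pair in months.

    Ranks assets by cumulative return over the past `sel` months, goes long
    the top asset and short the bottom one, and holds for `hold` months.
    A deliberately simplified stand-in for the dissertation's backtest.
    """
    profits = []
    for t in range(sel, len(returns) - hold, hold):
        past = [sum(returns[t - s][i] for s in range(1, sel + 1))
                for i in range(6)]
        winner = max(range(6), key=past.__getitem__)
        loser = min(range(6), key=past.__getitem__)
        profits.append(sum(returns[t + h][winner] - returns[t + h][loser]
                           for h in range(hold)))
    return sum(profits) / len(profits)

# Elitist GA over chromosomes (sel, hold); mutation perturbs each gene by +/-1.
pop = [(random.randint(1, 12), random.randint(1, 12)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=lambda c: momentum_profit(*c), reverse=True)
    elite = pop[:10]
    pop = elite + [(max(1, s + random.choice((-1, 0, 1))),
                    max(1, h + random.choice((-1, 0, 1))))
                   for s, h in elite]
best = max(pop, key=lambda c: momentum_profit(*c))
print(best)
```

The point of the sketch is the search structure: instead of fixing, say, a 6-month selection and 6-month holding period a priori, the GA explores the (selection, holding) space guided by backtested fitness.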

Relevance: 30.00%

Abstract:

Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. 
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy.
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
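The three LCS steps described above (constant initial strengths, roulette-wheel selection, reinforcement of used rules) can be sketched as follows; the rule names are hypothetical placeholders for the construction rules a scheduler would actually use:

```python
import random
random.seed(1)

def roulette(rules, strengths):
    """Select one rule with probability proportional to its strength."""
    pick = random.uniform(0, sum(strengths))
    acc = 0.0
    for rule, s in zip(rules, strengths):
        acc += s
        if pick <= acc:
            return rule
    return rules[-1]  # guard against floating-point edge cases

def reinforce(strengths, used, reward=1.0):
    """Add `reward` to rules used in the previous solution; leave others unchanged."""
    return [s + reward if i in used else s for i, s in enumerate(strengths)]

rules = ["most-constrained-first", "cheapest-first", "random-fill"]
strengths = [1.0, 1.0, 1.0]          # step 1: constant initial strength
chosen = roulette(rules, strengths)  # step 2: roulette-wheel selection
strengths = reinforce(strengths, used={rules.index(chosen)})  # step 3
print(chosen, strengths)
```

Iterated over many schedules, the reinforcement step biases the wheel toward rules that repeatedly appear in good solutions, which is the explicit, human-like learning the proposal aims for.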

Relevance: 30.00%

Abstract:

The dendritic cell algorithm (DCA) is an immune-inspired algorithm, developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation which results in a ‘context aware’ detection system. Previous applications of the DCA have included the detection of potentially malicious port scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we aim to compare the performance of the DCA and of a self-organizing map (SOM) when applied to the detection of SYN port scans, through experimental analysis. A SOM is an ideal candidate for comparison as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
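A SOM-based detector of the kind compared here can score anomalies by quantization error, i.e. the distance from an observation to its best-matching unit after training on normal traffic. A minimal sketch with invented features (not the paper's SYN-scan data):

```python
import math, random
random.seed(0)

def dist(a, b):
    return math.dist(a, b)

def train_som(data, n_units=4, epochs=50):
    """Train a 1-D self-organizing map on 2-D feature vectors."""
    units = [list(random.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                    # decaying learning rate
        radius = max(1.0, n_units / 2 * (1 - epoch / epochs))
        for x in data:
            bmu = min(range(n_units), key=lambda i: dist(units[i], x))
            for i in range(n_units):                       # neighbourhood update
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                units[i] = [u + lr * h * (xi - u) for u, xi in zip(units[i], x)]
    return units

def anomaly_score(units, x):
    """Quantization error: distance from x to its best-matching unit."""
    return min(dist(u, x) for u in units)

# Hypothetical (packets/s, distinct ports contacted) features of normal traffic.
normal = [[random.gauss(10, 1), random.gauss(3, 0.5)] for _ in range(200)]
som = train_som(normal)
print(anomaly_score(som, [10, 3]), anomaly_score(som, [80, 60]))
```

A scan-like observation far from the learned prototypes receives a much higher score than normal traffic, which is the fusion-of-features behaviour that makes the SOM a natural baseline for the DCA.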

Relevance: 20.00%

Abstract:

Aims. We determine the age and mass of the three best solar twin candidates in the open cluster M 67 through lithium evolutionary models. Methods. We computed a grid of evolutionary models with non-standard mixing at metallicity [Fe/H] = 0.01 with the Toulouse-Geneva evolution code for a range of stellar masses. We estimated the mass and age of 10 solar analogs belonging to the open cluster M 67, and made a detailed study of the three solar twins of the sample: YPB637, YPB1194, and YPB1787. Results. We obtained a very accurate estimation of the mass of our solar analogs in M 67 by interpolating in the grid of evolutionary models. The three solar twins allowed us to estimate the age of the open cluster as 3.87 (+0.55/-0.66) Gyr, which is better constrained than former estimates. Conclusions. Our results show that the three solar twin candidates have one solar mass within the errors and that M 67 has a solar age within the errors, validating its use as a solar proxy. M 67 is an important cluster when searching for solar twins.
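Interpolation in a model grid, as used here for the mass estimates, can be illustrated with a one-dimensional sketch; the grid values below are invented placeholders, not the Toulouse-Geneva models:

```python
def interp_mass(grid, a_li):
    """Linearly interpolate stellar mass from an observed Li abundance.

    `grid` maps model mass (solar masses) to the Li abundance A(Li)
    predicted at the cluster's age. The values used below are
    illustrative placeholders only.
    """
    points = sorted(grid.items())
    for (m0, a0), (m1, a1) in zip(points, points[1:]):
        lo, hi = sorted((a0, a1))
        if lo <= a_li <= hi:
            return m0 + (m1 - m0) * (a_li - a0) / (a1 - a0)
    raise ValueError("observed abundance outside the model grid")

grid = {0.95: 2.60, 1.00: 2.45, 1.05: 2.30}   # hypothetical A(Li) values
print(round(interp_mass(grid, 2.45), 3))
```

The real procedure interpolates in a multi-dimensional grid (mass, age, mixing prescription), but the principle of bracketing the observation between adjacent model tracks is the same.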

Relevance: 20.00%

Abstract:

Context. Compact groups of galaxies are entities that have high densities of galaxies and serve as laboratories to study galaxy interactions, intergalactic star formation and galaxy evolution. Aims. The main goal of this study is to search for young objects in the intragroup medium of seven compact groups of galaxies (HCG 2, 7, 22, 23, 92, 100 and NGC 92), as well as to evaluate the stage of interaction of each group. Methods. We used Fabry-Perot velocity fields and rotation curves together with GALEX NUV and FUV images and optical R-band and HI maps. Results. (i) HCG 7 and HCG 23 are in early stages of interaction; (ii) HCG 2 and HCG 22 are mildly interacting; and (iii) HCG 92, HCG 100 and NGC 92 are in late stages of evolution. We find that all three evolved groups contain populations of young blue objects in the intragroup medium, consistent with ages < 100 Myr, of which several are younger than 10 Myr. We also report the discovery of a tidal dwarf galaxy candidate in the tail of NGC 92. These three groups, besides containing galaxies that have peculiar velocity fields, also show extended HI tails. Conclusions. Our results indicate that the advanced stage of evolution of a group, together with the presence of intragroup HI clouds, may lead to star formation in the intragroup medium. A table containing all intergalactic HII regions and tidal dwarf galaxies confirmed to date is appended.

Relevance: 20.00%

Abstract:

Context. Tight binaries discovered in young, nearby associations are ideal targets for providing dynamical mass measurements to test the physics of evolutionary models at young ages and very low masses. Aims. We report the binarity of TWA22 for the first time. We aim at monitoring the orbit of this young and tight system to determine its total dynamical mass using an accurate distance determination. We also intend to characterize the physical properties (luminosity, effective temperature, and surface gravity) of each component based on near-infrared photometric and spectroscopic observations. Methods. We used the adaptive-optics assisted imager NACO to resolve the components, to monitor the complete orbit, and to obtain the relative near-infrared photometry of TWA22 AB. The adaptive-optics assisted integral field spectrometer SINFONI was also used to obtain medium-resolution (R(λ) = 1500-2000) spectra in the JHK bands. Comparison with empirical and synthetic libraries was necessary for deriving the spectral type, the effective temperature, and the surface gravity of each component of the system. Results. Based on an accurate trigonometric distance (17.5 ± 0.2 pc) determination, we infer a total dynamical mass of 220 ± 21 M_Jup for the system. From the complete set of spectra, we find an effective temperature T_eff = 2900 ± 200 K for TWA22 A and T_eff = 2900 (+200/-100) K for TWA22 B, and surface gravities between 4.0 and 5.5 dex. From our photometry and an M6 ± 1 spectral type for both components, we find luminosities of log(L/L☉) = -2.11 ± 0.13 dex and log(L/L☉) = -2.30 ± 0.16 dex for TWA22 A and B, respectively. By comparing these parameters with evolutionary models, we question the age and the multiplicity of this system. We also discuss a possible underestimation of the mass predicted by evolutionary models for young stars close to the substellar boundary.

Relevance: 20.00%

Abstract:

Context. Previous analyses of lithium abundances in main sequence and red giant stars have revealed the action of mixing mechanisms other than convection in stellar interiors. Beryllium abundances in stars with Li abundance determinations can offer valuable complementary information on the nature of these mechanisms. Aims. Our aim is to derive Be abundances along the whole evolutionary sequence of an open cluster. We focus on the well-studied open cluster IC 4651. These Be abundances are used with previously determined Li abundances, in the same sample stars, to investigate the mixing mechanisms in a range of stellar masses and evolutionary stages. Methods. Atmospheric parameters were adopted from a previous abundance analysis by the same authors. New Be abundances have been determined from high-resolution, high signal-to-noise UVES spectra using spectrum synthesis and model atmospheres. The careful synthetic modeling of the Be line region is used to calculate reliable abundances in rapidly rotating stars. The observed behavior of Be and Li is compared to theoretical predictions from stellar models including rotation-induced mixing, internal gravity waves, atomic diffusion, and thermohaline mixing. Results. Beryllium is detected in all the main sequence and turn-off sample stars, both slow- and fast-rotating stars, including the Li-dip stars, but is not detected in the red giants. Confirming previous results, we find that the Li dip is also a Be dip, although the depletion of Be is more modest than that of Li in the corresponding effective temperature range. For post-main-sequence stars, the Be dilution starts earlier within the Hertzsprung gap than expected from classical predictions, as does the Li dilution. A clear dispersion in the Be abundances is also observed. Theoretical stellar models including the hydrodynamical transport processes mentioned above are able to reproduce all the observed features well. These results show a good theoretical understanding of the Li and Be behavior along the color-magnitude diagram of this intermediate-age cluster for stars more massive than 1.2 M☉.

Relevance: 20.00%

Abstract:

The present study characterised the population genetic structure of Plebeia remota through mitochondrial DNA (mtDNA) analysis and evaluated evolutionary and ecological processes that may have contributed to the species' current genetic scenario. Seventy feral nests were sampled, representing four geographic regions (Cunha, Curitiba, Prudentopolis, and Blumenau). Fifteen composite mtDNA haplotypes were determined, and strong genetic structuring was detected among all populations. The current population structure may be a result of queen philopatry and of vegetation shifts caused by palaeoclimatic changes and the uplift of the Brazilian coastal ranges. Finally, this study strongly suggests a revision of the taxonomic status of P. remota from the Prudentopolis region.

Relevance: 20.00%

Abstract:

A total of 172 persons from nine South Amerindian, three African and one Eskimo populations were studied in relation to the Paired box gene 9 (PAX9) exon 3 (138 base pairs) as well as its 5' and 3' flanking intronic segments (232 bp and 220 bp, respectively), and the data were integrated with the information available for the same genetic region from individuals of different geographical origins. Nine mutations were scored in exon 3 and six in its flanking regions; four of them are new South American tribe-specific singletons. Exon 3 nucleotide diversity is several orders of magnitude higher than that of its intronic regions. Additionally, a set of variants in PAX9 and 101 other genes related to dentition can define at least some dental morphological differences between Sub-Saharan Africans and non-Africans, probably associated with adaptations after the modern human exodus from Africa. Exon 3 of PAX9 could be a good molecular example of how evolvability works.

Relevance: 20.00%

Abstract:

Background: Discussion surrounding the settlement of the New World has recently gained momentum with advances in molecular biology, archaeology and bioanthropology. Recent evidence from these diverse fields is found to support different colonization scenarios. The currently available genetic evidence suggests a "single migration" model, in which both early and later Native American groups derive from one expansion event into the continent. In contrast, the pronounced anatomical differences between early and late Native American populations have led others to propose more complex scenarios, involving separate colonization events of the New World and a distinct origin for these groups. Methodology/Principal Findings: Using large samples of Early American crania, we: 1) calculated the rate of morphological differentiation between Early and Late American samples under three different time divergence assumptions, and compared our findings to the predicted morphological differentiation under neutral conditions in each case; and 2) further tested three dispersal scenarios for the colonization of the New World by comparing the morphological distances among early and late Amerindians, East Asians, Australo-Melanesians and early modern humans from Asia to geographical distances associated with each dispersal model. Results indicate that the assumption of a last shared common ancestor outside the continent better explains the observed morphological differences between early and late American groups. This result is corroborated by our finding that a model comprising two Asian waves of migration coming through Bering into the Americas fits the cranial anatomical evidence best, especially when the effects of diversifying selection to climate are taken into account.
Conclusions: We conclude that the morphological diversity documented through time in the New World is best accounted for by a model postulating two waves of human expansion into the continent originating in East Asia and entering through Beringia.