886 results for Optimization, Markov Chain, Genetic Algorithm, Fuzzy Controller


Relevance: 100.00%

Abstract:

The purpose of this thesis was to identify the optimal design parameters for a jet nozzle that attains a local maximum shear stress while maximizing the average shear stress on the floor of a fluid-filled system. This research examined how geometric parameters of a jet nozzle, such as its angle, height, and orifice, influence the shear stress created on the bottom surface of a tank. Simulations were run using a Computational Fluid Dynamics (CFD) software package to determine shear stress values for a parameterized geometric domain including the jet nozzle. A response surface was created from the shear stress values obtained for 112 simulated designs. Multi-objective optimization software then used the response surface to generate designs with the best combination of parameters to achieve maximum shear stress and maximum average shear stress. The optimal configuration of parameters achieved larger shear stress values than a commercially available design.
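As a rough illustration of the workflow described above, the sketch below samples designs, fits a quadratic response surface by least squares, and optimizes over the surrogate with SciPy. The cfd_stand_in function and all parameter ranges are invented stand-ins for the CFD simulations, not values from the thesis.

    # Response-surface sketch: sample designs, fit a quadratic surrogate,
    # then optimize the surrogate. Synthetic stand-in for CFD data.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def cfd_stand_in(x):
        # Hypothetical smooth shear-stress response over (angle, height, orifice).
        a, h, d = x
        return -(a - 45.0)**2 / 500 - (h - 2.0)**2 - (d - 0.5)**2 + 10.0

    # 112 sampled designs, mirroring the number of simulations quoted above.
    X = rng.uniform([20, 0.5, 0.1], [70, 5.0, 1.0], size=(112, 3))
    y = np.array([cfd_stand_in(x) for x in X])

    def features(x):              # quadratic basis with interaction terms
        a, h, d = x
        return np.array([1, a, h, d, a*a, h*h, d*d, a*h, a*d, h*d])

    F = np.array([features(x) for x in X])
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    surrogate = lambda x: features(x) @ coef

    res = minimize(lambda x: -surrogate(x), x0=[45.0, 2.5, 0.5],
                   bounds=[(20, 70), (0.5, 5.0), (0.1, 1.0)])
    print("best (angle, height, orifice):", res.x)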

Relevance: 100.00%

Abstract:

Many classical as well as modern optimization techniques exist. One modern method, from the field of swarm intelligence, is ant colony optimization. This relatively new optimization concept uses artificial ants and is inspired by the way real ants search for food. In this thesis, a novel ant colony optimization technique for continuous domains was developed. The goal was to improve computing time and robustness compared to other optimization algorithms. Optimization function spaces can have extreme topologies and are therefore difficult to optimize. The proposed method effectively searched the domain and solved difficult single-objective optimization problems. The developed algorithm was run on numerous classic test cases, for both single- and multi-objective problems. The results demonstrate that the method is robust and stable and that the number of objective function evaluations is comparable to that of other optimization algorithms.
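To make the flavor of continuous-domain ACO concrete, here is a minimal sketch in the style of the ACO-R scheme of Socha and Dorigo (a solution archive with Gaussian sampling around archive members), tested on the sphere function. The thesis develops its own variant, so this should be read as a representative technique, not that algorithm; all parameter values are illustrative.

    # Continuous ACO sketch (ACO_R style): archive of good solutions,
    # new ants sample Gaussians centered on archive members.
    import numpy as np

    def aco_continuous(f, dim, bounds, n_ants=20, archive=10, q=0.1,
                       xi=0.85, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        X = rng.uniform(lo, hi, size=(archive, dim))
        fit = np.array([f(x) for x in X])
        for _ in range(iters):
            order = np.argsort(fit)           # best (lowest) first
            X, fit = X[order], fit[order]
            w = np.exp(-np.arange(archive)**2 / (2.0 * (q * archive)**2))
            w /= w.sum()                      # rank-based selection weights
            new = np.empty((n_ants, dim))
            for a in range(n_ants):
                k = rng.choice(archive, p=w)  # pick a guiding archive solution
                sigma = xi * np.mean(np.abs(X - X[k]), axis=0) + 1e-12
                new[a] = np.clip(rng.normal(X[k], sigma), lo, hi)
            newfit = np.array([f(x) for x in new])
            keep = np.argsort(np.concatenate([fit, newfit]))[:archive]
            X = np.vstack([X, new])[keep]     # keep the best of old + new
            fit = np.concatenate([fit, newfit])[keep]
        return X[0], fit[0]

    sphere = lambda x: float(np.sum(x**2))
    print(aco_continuous(sphere, dim=5, bounds=(-5.0, 5.0)))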

Relevance: 100.00%

Abstract:

The profitability of momentum portfolios in the equity markets derives from the continuation of stock returns over medium time horizons. The empirical evidence of momentum, however, differs significantly across markets around the world. The purpose of this dissertation is to: 1) help global investors determine the optimal selection and holding periods for momentum portfolios, 2) evaluate the profitability of the optimized momentum portfolios in different time periods and market states, 3) assess the investment strategy profits after considering transaction costs, and 4) interpret momentum returns within the framework of prior studies on investor behavior. Improving on the traditional practice of choosing arbitrary selection and holding periods, a genetic algorithm (GA) is employed. The GA performs a thorough and structured search to capture the return continuation and reversal patterns of momentum portfolios. Three portfolio formation methods are used (price momentum, earnings momentum, and combined earnings and price momentum), each paired with the non-linear optimization procedure (the GA). The focus is on common equity of the U.S. and a select number of countries, including Australia, France, Germany, Japan, the Netherlands, Sweden, Switzerland, and the United Kingdom. The findings suggest that the evolutionary algorithm increases the annualized profits of the U.S. momentum portfolios; however, the difference in mean returns is statistically significant only in certain cases. In addition, after considering transaction costs, neither the price nor the combined earnings and price momentum portfolios appear to generate abnormal returns. Positive risk-adjusted returns net of trading costs are documented solely during “up” markets for a portfolio long in prior winners only. The results on international momentum effects indicate that the GA improves the momentum returns by 2 to 5% on an annual basis. In addition, the relation between momentum returns and exchange rate appreciation/depreciation is examined; currency appreciation does not appear to significantly influence momentum profits. Further, the influence of the market state on momentum returns is not uniform across the countries considered. The implications of these findings are discussed with a focus on the practical aspects of momentum investing, both in the U.S. and globally.
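The GA component reduces to a search over integer (selection period, holding period) pairs scored by a backtest. A toy sketch follows; the backtest function below is a synthetic profit surface, not actual momentum-portfolio returns, and all GA settings are illustrative.

    # Toy GA over (selection period, holding period) pairs in months.
    import random

    random.seed(1)

    def backtest(sel, hold):
        # Hypothetical noisy profit surface peaking near (9, 6); a real
        # implementation would compute momentum portfolio returns here.
        return -(sel - 9)**2 - (hold - 6)**2 + random.gauss(0, 0.5)

    def ga(pop_size=30, gens=40, pmut=0.2):
        pop = [(random.randint(1, 24), random.randint(1, 24))
               for _ in range(pop_size)]
        for _ in range(gens):
            ranked = sorted(pop, key=lambda c: backtest(*c), reverse=True)
            parents = ranked[:pop_size // 2]       # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = (a[0], b[1])               # one-point crossover
                if random.random() < pmut:         # mutate one period by +/-1
                    child = (max(1, child[0] + random.choice((-1, 1))),
                             child[1])
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda c: backtest(*c))

    print("best (selection, holding) periods:", ga())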

Relevance: 100.00%

Abstract:

Today, over 15,000 Ion Mobility Spectrometry (IMS) analyzers are deployed at security checkpoints worldwide to detect explosives and illicit drugs. Current portal IMS instruments and other electronic-nose technologies detect explosives and drugs by analyzing samples containing the headspace air and loose particles residing on a surface. Canines can outperform these systems at sampling and detecting low-vapor-pressure explosives and drugs, such as RDX, PETN, cocaine, and MDMA, because these biological detectors target the volatile signature compounds available in the headspace rather than the non-volatile parent compounds of the explosives and drugs. In this dissertation research, volatile signature compounds available in the headspace over explosive and drug samples were detected using solid-phase microextraction (SPME) as a headspace sampling tool coupled to an IMS analyzer. A Genetic Algorithm (GA) technique was developed to optimize the operating conditions of a commercial IMS (GE Itemizer 2), leading to the successful detection of plastic explosives (Detasheet, Semtex H, and C-4) and illicit drugs (cocaine, MDMA, and marijuana). Short sampling times (between 10 sec and 5 min) were adequate to extract and preconcentrate sufficient analytes (> 20 ng) representing the volatile signatures in the headspace of a 15 mL glass vial or a quart-sized can containing ≤ 1 g of the bulk explosive or drug. Furthermore, a research-grade IMS with flexibility for changing operating conditions and physical configurations was designed and fabricated to accommodate future research into different analytes or physical configurations. The design and construction of the FIU-IMS were facilitated by computer modeling and simulation of ion behavior within an IMS. The simulation method developed uses SIMION/SDS and was evaluated with experimental data collected using a commercial IMS (PCP Phemto Chem 110). The FIU-IMS instrument has performance comparable to that of the GE Itemizer 2 (average resolving power of 14, resolution of 3 between two drugs and two explosives, and LODs ranging from 0.7 to 9 ng). The results from this dissertation further advance the concept of targeting volatile components to presumptively detect the presence of concealed bulk explosives and drugs by SPME-IMS, and the new FIU-IMS provides a flexible platform for future IMS research projects.
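The quoted figures of merit follow the standard IMS definitions: resolving power Rp = td / wh (drift time over peak width at half height) and two-peak resolution Rs = 2(td2 - td1) / (w1 + w2). A quick check with hypothetical drift times and widths (in ms), not measurements from the FIU-IMS:

    def resolving_power(t_d, w_h):
        # Single-peak resolving power: drift time over FWHM.
        return t_d / w_h

    def resolution(t_d1, w_1, t_d2, w_2):
        # Separation of two neighboring peaks.
        return 2 * (t_d2 - t_d1) / (w_1 + w_2)

    print(resolving_power(9.8, 0.7))        # ~14, as reported above
    print(resolution(9.8, 0.7, 12.1, 0.8))  # ~3 between two peaks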

Relevance: 100.00%

Abstract:

The authors would like to express their gratitude to the organizations and people who supported this research. Piotr Omenzetter’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research. Ben Ryder of Aurecon and Graeme Cummings of HEB Construction assisted in obtaining access to the bridge and information for modelling. Luke Williams and Graham Bougen, undergraduate research students, assisted with testing.

Relevance: 100.00%

Abstract:

The selection of a subset of requirements from all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements is then developed during the current iteration. This selection problem can be reformulated as a search problem, allowing it to be treated with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe the problem formally, extending an earlier formulation, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance of the Ant Colony System is compared with that of the Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
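A minimal pheromone-guided sketch of the underlying idea: artificial ants build requirement subsets under an effort bound, and pheromone reinforces requirements that appear in good subsets. The satisfaction values, costs, budget, and parameters are invented, and the update rule is simplified relative to a full Ant Colony System (no pseudo-random proportional rule or local pheromone update).

    # Simplified ant-based requirements selection under an effort bound.
    import random

    random.seed(2)
    value = [9, 5, 7, 3, 8, 4, 6, 2]    # customer satisfaction per requirement
    cost = [4, 2, 5, 1, 6, 3, 4, 1]     # development effort per requirement
    budget = 12
    n = len(value)
    tau = [1.0] * n                     # pheromone per requirement
    beta, rho = 2.0, 0.1

    def build_subset():
        chosen, spent = set(), 0
        while True:
            feas = [i for i in range(n)
                    if i not in chosen and spent + cost[i] <= budget]
            if not feas:
                return chosen
            wts = [tau[i] * (value[i] / cost[i]) ** beta for i in feas]
            pick = random.choices(feas, weights=wts)[0]
            chosen.add(pick)
            spent += cost[pick]

    best, best_val = set(), 0
    for _ in range(100):
        for _ in range(10):             # 10 ants per iteration
            s = build_subset()
            v = sum(value[i] for i in s)
            if v > best_val:
                best, best_val = s, v
        for i in range(n):              # evaporate, then reinforce best-so-far
            tau[i] *= (1 - rho)
            if i in best:
                tau[i] += best_val / 10.0
    print(sorted(best), best_val)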

Relevance: 100.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. By mimicking the natural evolutionary principle of survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which are common in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search a space of encodings and separate decoding routines then build solutions to the original problem.

In our previous indirect GAs, learning is implicit and restricted to the efficient adjustment of weights for a set of rules used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is therefore unreasonable and inconsistent with human learning. When a human scheduler works, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well together. He can identify good parts and is aware of the solution quality even before the scheduling process is complete, and can thus finish a schedule using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions.
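A stripped-down numerical illustration of this scheme follows: per-step rule probabilities are learned from the current set of good rule strings by counting, and new rule strings are sampled from those probabilities. For brevity the nodes are treated as independent (a degenerate Bayesian network), and the fitness function is a hypothetical stand-in for a real schedule decoder.

    # Counting-based estimation-of-distribution sketch over rule strings.
    import random
    from collections import Counter

    random.seed(3)
    STEPS, RULES, POP = 8, 4, 30

    def fitness(string):
        # Stand-in decoder score: pretend rule 2 works well early in the
        # construction sequence and rule 0 works well late.
        return sum((r == 2) * (STEPS - i) + (r == 0) * i
                   for i, r in enumerate(string))

    pop = [[random.randrange(RULES) for _ in range(STEPS)]
           for _ in range(POP)]
    for _ in range(30):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 2]          # current set of promising strings
        # "Learning amounts to counting": per-step rule frequencies,
        # with Laplace smoothing so no rule gets probability zero.
        probs = [[(Counter(s[i] for s in elite)[r] + 1)
                  / (len(elite) + RULES) for r in range(RULES)]
                 for i in range(STEPS)]
        # Sample new strings; elites survive by fitness selection.
        pop = elite + [[random.choices(range(RULES), weights=probs[i])[0]
                        for i in range(STEPS)]
                       for _ in range(POP - len(elite))]
    best = max(pop, key=fitness)
    print(best, fitness(best))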
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm.

This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.

Relevance: 100.00%

Abstract:

Markov chain analysis was recently proposed to assess the time scales and preferential pathways in biological or physical networks by computing residence times, first passage times, rates of transfer between nodes, and numbers of passages through a node. We propose to adapt an algorithm already published for simple systems to physical systems described with a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the eastern coast of Canada that are of interest for shellfish aquaculture. Current velocities were computed on a two-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. Flows and volumes allow computing the probabilities of transition between elements, the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that the Markov chain approach is complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g. the depletion index and carrying capacity assessment. The Markov chain approach has several advantages for estimating connectivity between pairs of sites: it makes it possible to estimate transfer rates and times at once in a very quick and efficient way, without the need to perform long-term simulations of particle or tracer concentrations.
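To make these quantities concrete, a small numerical illustration: given a transition matrix between elements (here a three-element system with an absorbing "left the system" outlet, with invented probabilities rather than output from the hydrodynamic model), mean residence and passage times follow from standard absorbing-chain algebra.

    # Residence and passage times from a Markov transition matrix.
    import numpy as np

    # P[i, j]: probability a particle in element i moves to element j per
    # time step; the last state is the absorbing outlet.
    P = np.array([[0.80, 0.15, 0.05, 0.00],
                  [0.10, 0.70, 0.15, 0.05],
                  [0.00, 0.10, 0.70, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])

    Q = P[:3, :3]                      # transitions among transient elements
    N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
    residence = N.sum(axis=1)          # mean steps before leaving the system
    print("mean residence time per starting element:", residence)

    # Expected steps, starting in element 0, until the particle first
    # reaches element 2 or exits: keep elements 0 and 1 as transient.
    Q2 = P[np.ix_([0, 1], [0, 1])]
    t = np.linalg.solve(np.eye(2) - Q2, np.ones(2))
    print("mean time from element 0 to element 2 or outlet:", t[0])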

Relevance: 100.00%

Abstract:

Evolutionary algorithms alone cannot solve optimization problems very efficiently, since these algorithms involve many random (not very rational) decisions. Combining evolutionary algorithms with other techniques has proven to be an efficient optimization methodology. In this talk, I will explain the basic ideas of our three algorithms along this line: (1) the orthogonal genetic algorithm, which treats crossover/mutation as an experimental design problem; (2) the multiobjective evolutionary algorithm based on decomposition (MOEA/D), which uses decomposition techniques from traditional mathematical programming in a multiobjective evolutionary algorithm; and (3) the regularity-model-based multiobjective estimation of distribution algorithm (RM-MEDA), which uses the regularity property and machine learning methods to improve multiobjective evolutionary algorithms.
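A compact sketch of the decomposition idea behind MOEA/D, using Tchebycheff scalarization on Schaffer's classic one-variable bi-objective problem; this simplified version omits parts of the published algorithm (e.g., mating restrictions and replacement limits), and all settings are illustrative.

    # MOEA/D-style decomposition: N scalar subproblems, each improved
    # using solutions from its weight-space neighbors.
    import numpy as np

    rng = np.random.default_rng(4)
    f = lambda x: np.array([x**2, (x - 2.0)**2])  # two conflicting objectives

    N, T = 21, 5                                  # subproblems, neighborhood
    w = np.linspace(0.0, 1.0, N)
    weights = np.stack([w, 1.0 - w], axis=1)
    nbrs = [np.argsort(np.abs(w - w[i]))[:T] for i in range(N)]
    x = rng.uniform(-5, 5, N)                     # one solution per subproblem
    F = np.array([f(xi) for xi in x])
    z = F.min(axis=0)                             # ideal point estimate

    def tcheby(Fi, lam):                          # Tchebycheff scalarization
        return np.max(lam * np.abs(Fi - z))

    for _ in range(200):
        for i in range(N):
            a, b = rng.choice(nbrs[i], 2, replace=False)
            child = np.clip((x[a] + x[b]) / 2 + rng.normal(0, 0.1), -5, 5)
            Fc = f(child)
            z = np.minimum(z, Fc)                 # update the ideal point
            for j in nbrs[i]:                     # update neighbors if better
                if tcheby(Fc, weights[j]) < tcheby(F[j], weights[j]):
                    x[j], F[j] = child, Fc
    print(np.round(np.sort(x), 2))                # Pareto set is [0, 2]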

Relevance: 100.00%

Abstract:

Scientific curiosity, exploration of georesources, and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models, and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems, so it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) in order to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need not only to use geophysical and hydrogeological data for parameter estimation, but also to use them to falsify or corroborate alternative geological scenarios.
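As a pointer to what option (3) involves in its MCMC form, here is a bare-bones Metropolis-Hastings sampler for a toy linear inverse problem; a real application would replace the forward model and prior with the geostatistical machinery discussed above.

    # Random-walk Metropolis for a one-parameter toy inverse problem.
    import numpy as np

    rng = np.random.default_rng(5)
    g = lambda m: m * np.arange(1.0, 6.0)        # toy forward model
    d_obs = g(2.0) + rng.normal(0, 0.3, 5)       # synthetic noisy data

    def log_post(m):
        misfit = np.sum((d_obs - g(m))**2) / (2 * 0.3**2)
        prior = m**2 / (2 * 10.0**2)             # broad Gaussian prior
        return -(misfit + prior)

    m, samples = 0.0, []
    for _ in range(20000):
        cand = m + rng.normal(0, 0.1)            # random-walk proposal
        if np.log(rng.uniform()) < log_post(cand) - log_post(m):
            m = cand                             # accept the candidate
        samples.append(m)
    post = np.array(samples[5000:])              # discard burn-in
    print("posterior mean/std:", post.mean(), post.std())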

Relevance: 100.00%

Abstract:

Combinatorial optimization problems have been studied intensively throughout history. They are highly applied problems that must be solved in reasonable times. This doctoral Thesis addresses three Operations Research problems. The first is the Traveling Salesman Problem with Pickups and Deliveries with Handling costs, which was approached with two metaheuristics based on Iterated Local Search; the results show that the proposed methods are faster and obtain good results with respect to metaheuristics from the literature. The second is the Quadratic Multiple Knapsack Problem, for which polynomial formulations and relaxations are presented for new instances of the problem; in addition, a metaheuristic and a matheuristic are proposed that are competitive with state-of-the-art algorithms. Finally, an Open-Pit Mining problem is approached and solved with a parallel genetic algorithm that allows excavations using truncated cones. Each of these problems was tested computationally on difficult instances from the literature, obtaining good-quality results in reasonable computational times and making significant contributions to the state of the art in Operations Research.
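For orientation, a generic Iterated Local Search skeleton of the kind the first problem's metaheuristics build on: descend to a local optimum, perturb, re-optimize, and keep the better solution. It is shown here on a plain random TSP with 2-opt, not on the pickup-and-delivery variant with handling costs studied in the Thesis.

    # Iterated local search on a random 30-city TSP: 2-opt + random kicks.
    import math
    import random

    random.seed(6)
    pts = [(random.random(), random.random()) for _ in range(30)]
    dist = lambda a, b: math.dist(pts[a], pts[b])
    length = lambda t: sum(dist(t[i], t[(i + 1) % len(t)])
                           for i in range(len(t)))

    def two_opt(tour):                 # local search: segment reversals
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if length(cand) < length(tour):
                        tour, improved = cand, True
        return tour

    def perturb(tour):                 # double-bridge style random kick
        a, b, c = sorted(random.sample(range(1, len(tour)), 3))
        return tour[:a] + tour[b:c] + tour[a:b] + tour[c:]

    tour = two_opt(list(range(len(pts))))
    for _ in range(20):
        cand = two_opt(perturb(tour))
        if length(cand) < length(tour):
            tour = cand                # keep improving restarts only
    print("tour length:", round(length(tour), 3))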

Relevance: 100.00%

Abstract:

Biological systems are surprisingly flexible in processing information from the real world. Some biological organisms have a central processing unit called the brain. The human brain consists of 10^11 neurons and performs intelligent processing in both exact and subjective ways. Artificial Intelligence (AI) attempts to bring the heuristics of biological systems into the world of digital computing in various ways, but much remains to be done before this is fully realized. Nevertheless, techniques such as artificial neural networks and fuzzy logic have proven effective for solving complex problems using the heuristics of biological systems. Recently, the number of applications of AI methods in animal science has increased significantly. The purpose of this article is to explain the basic principles of heuristic problem solving and to demonstrate how AI can be applied to build an expert system for solving problems in animal science.

Relevance: 100.00%

Abstract:

Background: Hepatitis B virus (HBV) can be classified into nine genotypes (A-I), defined by sequence divergence of more than 8% over the complete genome. This study aims to identify the genotypic distribution of HBV in 40 HBsAg-positive patients from Rondonia, Brazil. A fragment of 1306 bp partially comprising the overlapping surface and polymerase genes was amplified by PCR. Amplified DNA was purified and sequenced on an ABI PRISM 377 Automatic Sequencer (Applied Biosystems, Foster City, CA, USA). The obtained sequences were aligned with reference sequences from GenBank using Clustal X software and then edited with Se-Al software. Phylogenetic analyses were conducted by the Markov chain Monte Carlo (MCMC) approach using BEAST v.1.5.3. Results: The subgenotype distribution was A1 (37.1%), D3 (22.8%), F2a (20.0%), D4 (17.1%) and D2 (2.8%). Conclusions: These results, the first HBV genotypic characterization in Rondonia state, are consistent with other studies in Brazil, showing the presence of several HBV genotypes and reflecting the mixed origin of the population, which involves descendants of Native Americans, Europeans, and Africans.

Relevance: 100.00%

Abstract:

Background: GB virus C (GBV-C) is an enveloped positive-sense ssRNA virus belonging to the Flaviviridae family. Studies on the genetic variability of GBV-C reveal the existence of six genotypes: genotype 1 predominates in West Africa, genotype 2 in Europe and America, genotype 3 in Asia, genotype 4 in Southwest Asia, genotype 5 in South Africa and genotype 6 in Indonesia. The aim of this study was to determine the frequency and genotypic distribution of GBV-C in the Colombian population. Methods: Two groups were analyzed: i) 408 Colombian blood donors infected with HCV (n = 250) or HBV (n = 158) from Bogota and ii) 99 indigenous people with HBV infection from Leticia, Amazonas. A fragment of 344 bp from the 5' untranslated region (5' UTR) was amplified by nested RT-PCR. Viral sequences were genotyped by phylogenetic analysis using reference sequences from each genotype obtained from GenBank (n = 160). Bayesian phylogenetic analyses were conducted using the Markov chain Monte Carlo (MCMC) approach to obtain the MCC tree using BEAST v.1.5.3. Results: Among blood donors, 5.06% (n = 8) of the 158 HBsAg-positive samples and 3.2% (n = 8) of the 250 anti-HCV-positive samples were positive for GBV-C. In addition, 7.7% (n = 7) GBV-C-positive samples were found among the indigenous people from Leticia. Phylogenetic analysis revealed the presence of the following GBV-C genotypes among blood donors: 2a (41.6%), 1 (33.3%), 3 (16.6%) and 2b (8.3%). All genotype 1 sequences were found in co-infection with HBV, and 4 of 5 genotype 2a sequences were found in co-infection with HCV. All sequences from the indigenous people from Leticia were classified as genotype 3. The presence of GBV-C infection was not correlated with sex (p = 0.43), age (p = 0.38) or origin (p = 0.17). Conclusions: A high frequency of GBV-C genotypes 1 and 2 was found in blood donors. The presence of genotype 3 in an indigenous population was previously reported from the Santa Marta region in Colombia and in native people from Venezuela and Bolivia. This may be related to ancient migrations of Asian peoples to South America.