947 results for Bio-inspired optimization techniques


Relevance: 30.00%

Abstract:

Providing drinking water and process water is one of the major problems in many countries: surface water often needs to be treated to reach the necessary quality, and technological and financial difficulties impose great restrictions on operating the treatment units. Although water supply by simple and inexpensive systems has been an important objective of many scientific and research centers around the world, a large share of the population in developing countries, especially in rural areas, still does not have access to good-quality water. One of the largest available sources of acceptable water is sea water. There are two main ways to treat sea water: evaporation and reverse osmosis (R.O.). Nowadays the R.O. system is widely used for desalination because of its low cost and ease of operation and maintenance. Sea water should be pretreated before the R.O. plant, because raw sea water contains constituents that can decrease the yield of the membranes in the R.O. system. The subject of this research may be useful in this respect, and we hope to achieve complete success in the design and construction of useful pretreatment systems for R.O. plants. One of the most important units in a sea water pretreatment plant is filtration; the conventional method is the pressurized sand filter, and the subject of this research is a newer filtration unit called the continuous backwash sand filter (CBWSF). The CBWSF designed and tested in this research may be used more economically and with less difficulty. It consists of two main parts: the shell body and a central part comprising an airlift pump, raw water feeding pipe, air supply hose, backwash chamber and sand washer, as well as inlet and outlet connections. The CBWSF is a continuously operating filter, i.e. the filter does not have to be taken out of operation for backwashing or cleaning. Inlet water is fed through the sand bed while the sand bed moves downwards; the water is filtered while the sand becomes dirty. Simultaneously, the dirty sand is cleaned in the sand washer and the suspended solids are discharged in the backwash water. We analyze the behavior of the CBWSF in the pretreatment of sea water as a replacement for the pressurized sand filter. One important factor that is unfavorable for R.O. membranes is bio-fouling, which is quantified by the Silt Density Index (SDI). This research focuses on decreasing SDI and turbidity (NTU). Based on this goal, a pretreatment prototype was designed and manufactured for testing; the system design followed the design fundamentals of the CBWSF. The automatic backwash sand filter can be used in both small and large water supply schemes. In large water treatment plants, the filter units perform the filtration and backwash stages separately, while in small treatment plants the unit is usually compact to reduce energy consumption. The analysis of the system showed that it may be used feasibly for water treatment, especially for limited populations. Its construction is rapid, simple and economical, and its performance is high because no moving mechanical part is used in it, so it may be proposed as an effective method to improve water quality, and consequently the level of hygiene, in remote parts of the country.
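For reference, the Silt Density Index used here as the fouling indicator is conventionally measured (e.g. following ASTM D4189) by timing the filtration of a fixed volume through a 0.45 μm membrane at constant pressure; a standard form of the definition is

SDI_T = \frac{\left(1 - t_i/t_f\right) \times 100}{T},

where t_i and t_f are the times needed to collect the sample volume (typically 500 mL) at the start of the test and after T minutes of total filtration time (commonly T = 15).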

Relevance: 30.00%

Abstract:

Evolutionary algorithms alone cannot solve optimization problems very efficiently, since these algorithms involve many random (not very rational) decisions. Combining evolutionary algorithms with other techniques has proven to be an efficient optimization methodology. In this talk, I will explain the basic ideas of three of our algorithms along this line: (1) the orthogonal genetic algorithm, which treats crossover/mutation as an experimental design problem; (2) the multiobjective evolutionary algorithm based on decomposition (MOEA/D), which brings decomposition techniques from traditional mathematical programming into multiobjective evolutionary optimization; and (3) the regularity model based multiobjective estimation of distribution algorithm (RM-MEDA), which uses the regularity property and machine learning methods to improve multiobjective evolutionary algorithms.
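As a rough illustration of the decomposition idea behind MOEA/D (a hypothetical sketch, not code from the talk), the snippet below scalarizes a toy bi-objective problem with the Tchebycheff approach, so that each weight vector defines one single-objective subproblem. MOEA/D would optimize all subproblems jointly, sharing information between neighbouring weight vectors; this sketch simply solves each one by enumeration.

import numpy as np

def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization: max_i w_i * |f_i(x) - z_i*|."""
    return np.max(weights * np.abs(f - z_star))

# Toy bi-objective problem (assumed for illustration): minimize (x^2, (x-2)^2).
def objectives(x):
    return np.array([x**2, (x - 2.0)**2])

# A set of weight vectors, one subproblem per vector.
W = np.array([[w, 1.0 - w] for w in np.linspace(0.01, 0.99, 11)])
z_star = np.array([0.0, 0.0])          # ideal point of the toy problem

# Each subproblem is optimized independently here, by brute force over a grid.
candidates = np.linspace(-1.0, 3.0, 401)
for w in W:
    scores = [tchebycheff(objectives(x), w, z_star) for x in candidates]
    best = candidates[int(np.argmin(scores))]
    print(f"weights={w}, best x={best:.3f}, objectives={objectives(best)}")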

Relevance: 30.00%

Abstract:

Phylogenetic inference consists of searching for an evolutionary tree that best explains the genealogical relationships of a set of species. Phylogenetic analysis has a large number of applications in areas such as biology, ecology and paleontology. Several criteria have been defined for inferring phylogenies, among which are maximum parsimony and maximum likelihood. The first tries to find the phylogenetic tree that minimizes the number of evolutionary steps needed to describe the evolutionary history of the species, while the second tries to find the tree with the highest probability of producing the observed data according to an evolutionary model. The search for a phylogenetic tree can be formulated as a multi-objective optimization problem, which aims to find trees that satisfy both the parsimony and the likelihood criteria simultaneously (and as well as possible). Because these criteria are different, there will not be a single optimal solution (a single tree), but a set of compromise solutions, called the Pareto optimal set. Evolutionary algorithms are nowadays being used successfully to find these solutions. These algorithms are a family of non-exact techniques inspired by the process of natural selection, and they usually find high-quality solutions to difficult optimization problems. They work by handling a set of trial solutions (trees, in the phylogeny case) with operators, some of which exchange information between solutions, simulating DNA crossover, while others apply random modifications, simulating mutation. The result of these algorithms is an approximation to the Pareto optimal set, which can be shown in a graph so that the domain expert (the biologist, in the case of inference) can choose the compromise solution of greatest interest. For multi-objective optimization applied to phylogenetic inference, there is an open-source software tool called MO-Phylogenetics, designed to solve inference problems with classic and state-of-the-art evolutionary algorithms. REFERENCES [1] C.A. Coello Coello, G.B. Lamont, D.A. van Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems. Springer. August 2007. [2] C. Zambrano-Vega, A.J. Nebro, J.F. Aldana-Montes. MO-Phylogenetics: a phylogenetic inference software tool with multi-objective evolutionary metaheuristics. Methods in Ecology and Evolution. In press. February 2016.
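To make the notion of Pareto optimality concrete, the minimal sketch below (purely illustrative; not taken from MO-Phylogenetics, and with made-up scores) filters a set of candidate trees, each scored by a parsimony value and a negative log-likelihood, down to its non-dominated (Pareto) front, assuming both objectives are minimized.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scored):
    """Keep candidates whose (parsimony, -log-likelihood) vector is not dominated."""
    front = []
    for name, score in scored:
        if not any(dominates(other, score) for _, other in scored if other != score):
            front.append((name, score))
    return front

# Hypothetical candidate trees with (parsimony score, negative log-likelihood).
candidates = [("tree_a", (120, 4510.2)),
              ("tree_b", (118, 4523.9)),
              ("tree_c", (125, 4498.7)),
              ("tree_d", (121, 4515.0))]   # dominated by tree_a

print(pareto_front(candidates))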

Relevance: 30.00%

Abstract:

Agricultural crops can be damaged by fungi, insects, worms and other organisms that cause diseases and decrease the production yield. The effect of these damaging agents can be reduced using pesticides. Among them, triazole compounds are effective substances against fungi such as Oidium. Nevertheless, it has been found that residues of these fungicides in foods, as well as in derived products, can affect the health of consumers. Therefore, the European Union has established several regulations fixing maximum pesticide residue levels in a wide range of foods in order to assure consumer safety. Hence, it is very important to develop adequate methods to determine these pesticide compounds. In most cases, gas or liquid chromatographic (GC, LC) separations are used in the analysis of the samples, but first it is necessary to apply proper sample treatments in order to preconcentrate and isolate the target analytes. To reach this aim, microextraction techniques are very effective tools, because they allow both preconcentration and extraction of the analytes in a single, simple step, which considerably reduces the sources of error. With these objectives, two remarkable techniques have been widely used during the last years: solid phase microextraction (SPME) and liquid phase microextraction (LPME) in its different variants. Both techniques, which avoid or reduce the use of toxic solvents, are conveniently coupled to chromatographic equipment, providing good quantitative results for a wide range of matrices and compounds. In this work, simple and reliable methods have been developed using SPME and ultrasound-assisted emulsification microextraction (USAEME) coupled to GC or LC for triazole fungicide determination. The proposed methods allow triazole concentrations of the order of μg L⁻¹ to be determined confidently in different fruit samples. Chemometric tools have been used to accomplish successful determinations: firstly, in the selection and optimization of the variables involved in the microextraction processes; and secondly, to overcome the problems related to overlapping peaks. Different fractional factorial designs have been used for the screening of the experimental variables, and central composite designs have been carried out to obtain the best experimental conditions. To solve the overlapping peak problems, multivariate calibration methods have been used: Parallel Factor Analysis 2 (PARAFAC2), Multivariate Curve Resolution (MCR) and Parallel Factor Analysis with Linear Dependencies (PARALIND) have been proposed, the algorithms appropriate to the data characteristics have been applied, and the results have been compared. Grape and apple samples were selected because of their occurrence in the Basque Country and their relevance to the production of cider and the regional txakoli wine. These crops are often treated with triazole compounds to control the problems caused by fungi. The peel and pulp of grape and apple, their juices and some commercial products such as musts, juice and cider have been analysed, showing the adequacy of the developed methods for triazole determination in this kind of fruit sample.
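As a hedged illustration of the chemometric optimization step (hypothetical data and factor names, not the authors' code), the sketch below fits a second-order response-surface model to central-composite-design data for two coded extraction variables by ordinary least squares.

import numpy as np

# Hypothetical coded factor levels (e.g. extraction time, ultrasound amplitude)
# laid out as a small central composite design, with simulated responses
# (e.g. extraction recovery in %).
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([71.0, 80.5, 74.2, 86.1, 69.8, 84.0, 72.5, 79.9,
              88.2, 87.6, 88.9])

# Build the quadratic model matrix: 1, x1, x2, x1*x2, x1^2, x2^2.
x1, x2 = X[:, 0], X[:, 1]
M = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares estimate of the response-surface coefficients.
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print("b0, b1, b2, b12, b11, b22 =", np.round(coef, 3))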

Relevance: 30.00%

Abstract:

Network Intrusion Detection Systems (NIDS) are computer systems which monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures which are written by security experts. Nowadays, in order to solve problems with false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to the IDS problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
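The paper's own immune algorithm is not reproduced here; as a generic sketch of the immune-inspired idea, the toy negative-selection scheme below generates random detectors and keeps only those that do not match normal ("self") traffic patterns, so the surviving detectors flag anomalous packets. All patterns and parameters are invented for illustration.

import random

def matches(detector, pattern, r):
    """r-contiguous-bits matching rule used by classic negative selection."""
    for i in range(len(pattern) - r + 1):
        if detector[i:i + r] == pattern[i:i + r]:
            return True
    return False

def generate_detectors(self_set, n_detectors, length=16, r=6, seed=0):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        # Censoring step: discard candidates that match any normal pattern.
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

# Hypothetical "self" set: bit-string summaries of benign packets.
self_set = ["0101010101010101", "0011001100110011", "0000111100001111"]
detectors = generate_detectors(self_set, n_detectors=5)

suspect = "1110001011011100"
print("anomalous" if any(matches(d, suspect, 6) for d in detectors) else "normal")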

Relevance: 30.00%

Abstract:

Ergosterol, a molecule with high commercial value, is the most abundant mycosterol in Agaricus bisporus L. To replace common conventional extraction techniques (e.g. Soxhlet), the present study reports the optimal ultrasound-assisted extraction conditions for ergosterol. After preliminary tests, the results showed that solvent, time and ultrasound power altered the extraction efficiency. Using response surface methodology, models were developed to investigate the favourable experimental conditions that maximize the extraction efficiency. All statistical criteria demonstrated the validity of the proposed models. Overall, ultrasound-assisted extraction with ethanol at 375 W for 15 min proved to be as efficient as Soxhlet extraction, yielding 671.5 ± 0.5 mg ergosterol/100 g dw. However, extracts with higher purity (mg ergosterol/g extract) were obtained with n-hexane. Finally, removal of the saponification step was proposed, which simplifies the extraction process and makes it more feasible for industrial transfer.
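As a hedged illustration of the response-surface step (made-up coefficients and coded variables, not the fitted model from this study), the snippet below locates the maximum of a quadratic response surface over the coded experimental region.

import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted coefficients of a quadratic response surface
# y = b0 + b1*t + b2*P + b12*t*P + b11*t^2 + b22*P^2
# in coded units of time (t) and ultrasound power (P).
b = np.array([88.0, 3.1, 2.4, 0.8, -4.2, -3.5])

def predicted_yield(x):
    t, P = x
    return b[0] + b[1]*t + b[2]*P + b[3]*t*P + b[4]*t**2 + b[5]*P**2

# Maximize the model by minimizing its negative within the coded design region.
res = minimize(lambda x: -predicted_yield(x), x0=[0.0, 0.0],
               bounds=[(-1.41, 1.41), (-1.41, 1.41)])
print("optimal coded (time, power):", np.round(res.x, 3),
      "predicted yield:", round(-res.fun, 2))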

Relevance: 30.00%

Abstract:

Safe operation of unmanned aerial vehicles (UAVs) over populated areas requires reducing the risk posed by a UAV if it crashes during its operation. We considered several types of UAV risk-based path planning problems and developed techniques for estimating the risk to third parties on the ground. The path planning problem requires making trade-offs between risk and flight time. Four optimization approaches for solving the problem were tested; a network-based approach that used a greedy algorithm to improve the original solution generated the best solutions with the least computational effort. Additionally, an approach for solving a combined design and path planning problem was developed and tested. This approach was extended to solve a robust risk-based path planning problem in which uncertainty about wind conditions affects the risk posed by a UAV.
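As a rough sketch of a network-based formulation (not the paper's actual model; the graph and numbers are invented), each edge below carries a flight time and a ground-risk estimate, and Dijkstra's algorithm is run on a weighted combination of the two; sweeping the weight alpha traces out the risk/flight-time trade-off.

import heapq

# Hypothetical waypoint graph: edge -> (flight_time, ground_risk).
graph = {
    "A": {"B": (4.0, 0.2), "C": (6.0, 0.05)},
    "B": {"D": (5.0, 0.3)},
    "C": {"D": (4.0, 0.1)},
    "D": {},
}

def best_path(graph, start, goal, alpha):
    """Dijkstra on cost = alpha*risk + (1 - alpha)*time."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (t, r) in graph[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + alpha*r + (1 - alpha)*t, nxt, path + [nxt]))
    return None

for alpha in (0.0, 0.5, 0.95):
    print(alpha, best_path(graph, "A", "D", alpha))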

Relevance: 30.00%

Abstract:

In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.

Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon that the synapse might exploit to differentiate Ca$^{2+}$ signaling that would lead to either the strengthening or the weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to confront this non-ideality directly. A natural next step toward understanding the chemical physics that underlies these processes is to consider \textit{in silico} methods, specifically atomistic simulation methods, that might augment our modeling efforts.

In the second part of this thesis, we use evolutionary algorithms to optimize \textit{in silico} methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis will focus on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions that are involved in ligand binding, as frequently discussed in the first part of this thesis.
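As a minimal, purely illustrative sketch of this idea (a toy two-site electronegativity-equalization model with made-up reference charges, not the thesis's charge equilibration scheme), a simple (1+4) evolution strategy below fits per-element electronegativity and hardness parameters to reproduce target partial charges.

import random

# Hypothetical reference partial charges for a few diatomic fragments,
# used as fitting targets (made-up numbers, for illustration only).
reference = {("H", "O"): 0.40, ("H", "C"): 0.06, ("C", "O"): 0.30}
elements = ["H", "C", "O"]

def predicted_charge(params, a, b):
    # Two-site electronegativity equalization: q_a = (chi_b - chi_a) / (J_a + J_b).
    chi, J = params
    return (chi[b] - chi[a]) / (J[a] + J[b])

def loss(params):
    return sum((predicted_charge(params, a, b) - q)**2 for (a, b), q in reference.items())

def mutate(params, rng, sigma=0.05):
    chi, J = params
    new_chi = {e: chi[e] + rng.gauss(0, sigma) for e in elements}
    new_J = {e: max(0.1, J[e] + rng.gauss(0, sigma)) for e in elements}  # keep hardness positive
    return new_chi, new_J

# Simple (1+4) evolution strategy over electronegativity (chi) and hardness (J).
rng = random.Random(1)
best = ({e: 0.0 for e in elements}, {e: 1.0 for e in elements})
for _ in range(2000):
    children = [mutate(best, rng) for _ in range(4)]
    best = min(children + [best], key=loss)

print("final loss:", round(loss(best), 6))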

Relevance: 30.00%

Abstract:

In the first part of this thesis we search for physics beyond the Standard Model through the search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at a center of mass energy of √s = 8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb⁻¹.

In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.

The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found from standard model production of the Higgs boson.
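For reference (these definitions are not quoted from the thesis; they follow the usual CMS conventions), the razor kinematic variables are built from the four-momenta of the two "megajets" into which the visible objects are grouped, together with the missing transverse momentum:

M_R = \sqrt{\left(|\vec p^{\,j_1}| + |\vec p^{\,j_2}|\right)^2 - \left(p_z^{j_1} + p_z^{j_2}\right)^2}, \qquad
M_T^R = \sqrt{\frac{E_T^{\mathrm{miss}}\left(p_T^{j_1} + p_T^{j_2}\right) - \vec E_T^{\mathrm{miss}} \cdot \left(\vec p_T^{\,j_1} + \vec p_T^{\,j_2}\right)}{2}}, \qquad
R^2 = \left(\frac{M_T^R}{M_R}\right)^2.

Events from heavy-particle production tend to populate large M_R and large R^2, while most background accumulates at small values, which is what makes the razor plane useful for isolating atypical topologies.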

We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation with a p-value of 10⁻³ and a local significance of 3.35σ. This background prediction comes from 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader region of the razor plane.

In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, the ground state of which is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and train the classifier. We find that we are able to do this successfully in less than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with the more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
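As an illustrative sketch (not the thesis's implementation; the data are synthetic), the snippet below builds the binary cost function commonly used in quantum-annealing ensemble training: binary weights select which weak classifiers enter the ensemble, a correlation term rewards agreement with the labels, correlated classifiers penalize each other, and a regularization term discourages large ensembles.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: outputs of N weak classifiers (values in {-1,+1})
# on S events, plus the true labels (+1 = signal, -1 = background).
N, S = 6, 200
labels = rng.choice([-1, 1], size=S)
weak = np.sign(labels + rng.normal(0, 1.5, size=(N, S)))  # weakly correlated with labels
weak[weak == 0] = 1

lam = 0.5  # regularization strength penalizing ensemble size

# Cost: minimize  sum_ij w_i w_j (c_i . c_j)  -  2 sum_i w_i (c_i . y)  +  lam * sum_i w_i
C = weak @ weak.T / S           # classifier-classifier correlations
b = weak @ labels / S           # classifier-label correlations

def cost(w):
    return w @ C @ w - 2 * w @ b + lam * w.sum()

# A quantum annealer would sample low-energy states of this cost; here we
# brute-force the 2^N binary weight vectors since N is tiny.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=N)),
           key=cost)
print("selected weak classifiers:", np.flatnonzero(best), "cost:", round(cost(best), 4))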

Relevance: 30.00%

Abstract:

A classical approach for dealing with two-stage and multi-stage optimization problems under uncertainty is scenario analysis. To do so, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the objective function of the augmented Lagrangian. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to follow the progress of the algorithm more closely. Numerical experiments on instances of multi-stage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. For the question of handling the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear term. Although our method remains to be tested, we have the intuition that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
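As a minimal sketch of the progressive hedging iteration (illustrative only: a toy problem with a single first-stage decision, closed-form scenario subproblems, and a fixed rather than adaptive penalty parameter), consider scenario costs f_s(x) = a_s (x - b_s)^2.

# Progressive hedging on a toy problem: minimize E_s[ a_s * (x - b_s)^2 ]
# over a single first-stage decision x. Each scenario subproblem
#   min_x  a_s*(x - b_s)^2 + w_s*x + (rho/2)*(x - x_bar)^2
# has the closed-form solution used below.
scenarios = [  # (probability, a_s, b_s) -- made-up data
    (0.3, 1.0, 2.0),
    (0.5, 2.0, 5.0),
    (0.2, 0.5, 8.0),
]
rho = 1.0                       # penalty parameter (kept fixed in this sketch)
w = [0.0] * len(scenarios)      # multipliers enforcing nonanticipativity
x_bar = 0.0

for it in range(100):
    # Solve each scenario subproblem.
    x = [(2*a*b - w[s] + rho*x_bar) / (2*a + rho)
         for s, (p, a, b) in enumerate(scenarios)]
    # Aggregate into the implementable (nonanticipative) solution.
    x_bar = sum(p * x[s] for s, (p, a, b) in enumerate(scenarios))
    # Multiplier update.
    w = [w[s] + rho * (x[s] - x_bar) for s in range(len(scenarios))]

exact = sum(p*a*b for p, a, b in scenarios) / sum(p*a for p, a, b in scenarios)
print("PH solution:", round(x_bar, 4), "exact minimizer:", round(exact, 4))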

Relevance: 30.00%

Abstract:

Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, are an indispensable part of high-power-density products such as hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of the drive systems. Matching the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that offers the best compromise between the reliability and the optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques that connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization process and a hardware-in-the-loop finite-element-based optimization process. It was later employed in the design of very accurate and highly efficient physics-based customized observers, which are required for fault diagnosis as well as sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified; this process makes it possible to optimally redefine the assumptions used in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, and the physics-based fault diagnosis and sensorless techniques are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first, and several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique in a range from medium to very low speed effectively shows the capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that the proposed technique works under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
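This is not the dissertation's physics-based observer, but as a minimal illustration of the sensorless idea, the sketch below (with assumed machine parameters) estimates the rotor position of a surface-mounted PMSM from its stationary-frame back-EMF, reconstructed from measured voltages and currents via the standard stator voltage equation.

import numpy as np

# Assumed machine parameters for a small surface-mounted PMSM (illustrative).
R, L, lam_m = 0.5, 1.2e-3, 0.015   # stator resistance [ohm], inductance [H], PM flux [Wb]
dt = 1e-4                          # sampling period [s]

def estimate_theta(v_ab, i_ab, i_ab_prev):
    """Back-EMF based rotor angle estimate in the stationary alpha-beta frame."""
    di = (i_ab - i_ab_prev) / dt
    e = v_ab - R * i_ab - L * di           # e_alpha = -w*lam*sin(theta), e_beta = w*lam*cos(theta)
    return np.arctan2(-e[0], e[1])

# Quick self-check with synthetic steady-state waveforms at electrical speed w.
w, theta0 = 2 * np.pi * 50, 0.3
t = np.arange(0, 0.02, dt)
theta = w * t + theta0
i_ab = np.stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)])          # assumed currents
e_ab = np.stack([-w * lam_m * np.sin(theta), w * lam_m * np.cos(theta)])
di_ab = np.gradient(i_ab, dt, axis=1)
v_ab = R * i_ab + L * di_ab + e_ab                                   # consistent voltages

est = [estimate_theta(v_ab[:, k], i_ab[:, k], i_ab[:, k - 1]) for k in range(1, len(t))]
print("estimation error [rad]:", np.round(np.unwrap(est) - theta[1:], 3)[:5])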

Relevance: 30.00%

Abstract:

The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. Therefore, all the elements and components of the energy conversion system are modeled numerically and combined to obtain a behavioral model of the whole system. Previously proposed high frequency (HF) models of power converters are based on circuit models that capture only the parasitic inner parameters of the power devices and the connections between the components. This dissertation aims to obtain appropriate physics-based models for power conversion systems that not only represent the steady-state behavior of the components but also predict their high frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition. The proposed physics-based model enables us to accurately develop components such as effective EMI filters, switching algorithms and circuit topologies [7]. One application of the developed modeling technique is the design of new topologies for high-frequency, high-efficiency converters for variable speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high power applications with the ability to overcome the blocking voltage limitations of available power semiconductor devices. Another advantage is the selection of the best matching topology, with an inherent reduction of switching losses that can be exploited to improve the overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints. This includes physical characteristics such as decreasing the size and weight of the package, optimized interactions with neighboring components, and higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions and the design of attenuation measures and enclosures.

Relevance: 30.00%

Abstract:

Catering to society's demand for high performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, the power consumption/density is growing exponentially. The increasing power consumption directly translates into high chip temperature, which not only raises packaging/cooling costs, but also degrades the performance/reliability and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we address the power/thermal issues from a system-level perspective. Specifically, we seek to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption for a given voltage schedule on a multi-core platform. Finally, we conclude the dissertation with an elaborated discussion of future extensions of our research.
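As a rough sketch of the leakage/temperature interaction involved (illustrative parameters, not the dissertation's thermal model), the snippet below simulates a lumped RC thermal model in which dynamic power scales with the cube of the DVS speed setting and leakage power grows with temperature, and reports the peak temperature reached under two hypothetical voltage/speed schedules.

# Lumped RC thermal model with temperature-dependent leakage (illustrative values).
T_AMB = 45.0              # ambient temperature [C]
R_TH, C_TH = 0.8, 340.0   # thermal resistance [C/W] and capacitance [J/C]
DT = 0.01                 # simulation step [s]

def power(speed, temp):
    """Total power: dynamic term ~ s^3 plus a leakage term growing with temperature."""
    p_dyn = 40.0 * speed**3
    p_leak = 8.0 + 0.45 * (temp - T_AMB)   # linearized leakage/temperature dependency
    return p_dyn + p_leak

def simulate(schedule, t_end, temp0=T_AMB):
    """schedule(t) -> speed in [0, 1]; returns the peak temperature reached."""
    temp, peak, t = temp0, temp0, 0.0
    while t < t_end:
        p = power(schedule(t), temp)
        # dT/dt = (P - (T - T_amb)/R_th) / C_th
        temp += DT * (p - (temp - T_AMB) / R_TH) / C_TH
        peak = max(peak, temp)
        t += DT
    return peak

# Two hypothetical schedules: alternating between a high and a low speed, or constant.
oscillating = lambda t: 1.0 if (t % 2.0) < 1.0 else 0.6
constant = lambda t: 0.85

print("peak T, oscillating:", round(simulate(oscillating, 1000.0), 1))
print("peak T, constant   :", round(simulate(constant, 1000.0), 1))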
