19 results for Evolutionary optimization methods
in Aston University Research Archive
Abstract:
This work follows a feasibility study (187) which suggested that a process for purifying wet-process phosphoric acid by solvent extraction should be economically viable. The work was divided into two main areas: (i) chemical and physical measurements on the three-phase system, with or without impurities; (ii) process simulation and optimization. The object was to test the process technically and economically and to optimise the type of solvent. The chemical equilibria and distribution curves for the system water - phosphoric acid - solvent have been determined for the solvents n-amyl alcohol, tri-n-butyl phosphate, di-isopropyl ether and methyl isobutyl ketone. Both pure phosphoric acid and acid containing known amounts of naturally occurring impurities (FePO4, AlPO4, Ca3(PO4)2 and Mg3(PO4)2) were examined. The hydrodynamic characteristics of the systems were also studied. The experimental results obtained for drop size distribution were compared with those obtainable from Hinze's equation (32), and it was found that they deviated by an amount related to the turbulence. A comprehensive literature survey on the purification of wet-process phosphoric acid by organic solvents has been made. The literature regarding solvent extraction fundamentals and equipment, and optimization methods for the envisaged process, was also reviewed. A modified form of the Kremser-Brown and Souders equation to calculate the number of contact stages was derived. The modification takes into account the special nature of the phosphoric acid distribution curves in the studied systems. The process flow-sheet was developed and simulated. Powell's direct search optimization method was selected in conjunction with the linear search algorithm of Davies, Swann and Campey. The objective function was defined as the total annual manufacturing cost, and the program was employed to find the optimum operating conditions for any one of the chosen solvents.
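The direct-search strategy named in the abstract can be sketched compactly. This is a minimal Python illustration, not the thesis code: a golden-section search stands in for the Davies, Swann and Campey line-search algorithm, the search bounds are arbitrary, and the quadratic `cost` function is a hypothetical stand-in for the total-annual-manufacturing-cost objective.

```python
import numpy as np

def line_min(f, x, d, lo=-5.0, hi=5.0, tol=1e-6):
    """Golden-section line search along direction d (a stand-in for the
    Davies, Swann and Campey algorithm used in the thesis)."""
    g = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, e = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(x + c * d) < f(x + e * d):
            b, e = e, c
            c = b - g * (b - a)
        else:
            a, c = c, e
            e = a + g * (b - a)
    return x + (a + b) / 2 * d

def powell(f, x0, iters=50):
    """Powell's direct-search method: successive line minimizations along a
    direction set, replacing the oldest direction with the net displacement."""
    x = np.asarray(x0, dtype=float)
    dirs = list(np.eye(len(x)))
    for _ in range(iters):
        x_start = x.copy()
        for d in dirs:
            x = line_min(f, x, d)
        new_dir = x - x_start
        if np.linalg.norm(new_dir) < 1e-9:
            break  # no further progress: converged
        dirs.pop(0)
        dirs.append(new_dir / np.linalg.norm(new_dir))
        x = line_min(f, x, dirs[-1])
    return x

# Hypothetical stand-in for the total-annual-cost objective.
cost = lambda v: (v[0] - 1.0) ** 2 + 10 * (v[1] + 2.0) ** 2
x_opt = powell(cost, [0.0, 0.0])
```

On this separable quadratic the method reaches the minimizer (1, -2) after a single sweep of line minimizations; the direction-replacement step matters on objectives with correlated variables.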
The final results demonstrated the following order of feasibility for purifying wet-process acid: di-isopropyl ether, methyl isobutyl ketone, n-amyl alcohol and tri-n-butyl phosphate.
An efficient, approximate path-following algorithm for elastic net based nonlinear spike enhancement
Abstract:
Unwanted spike noise in a digital signal is a common problem in digital filtering. However, sometimes the spikes are wanted and other, superimposed, signals are unwanted, and linear, time-invariant (LTI) filtering is ineffective because the spikes are wideband - overlapping with independent noise in the frequency domain. So, no LTI filter can separate them, necessitating nonlinear filtering. However, there are applications in which the noise includes drift or smooth signals for which LTI filters are ideal. We describe a nonlinear filter formulated as the solution to an elastic net regularization problem, which attenuates band-limited signals and independent noise, while enhancing superimposed spikes. Making use of known analytic solutions, a novel, approximate path-following algorithm is given that provides a good filtered output with reduced computational effort by comparison to standard convex optimization methods. Accurate performance is shown on real, noisy electrophysiological recordings of neural spikes.
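The analytic elastic net solutions that such a path-following scheme exploits are easy to state in the simplest case. A minimal sketch, assuming the pure identity-design denoising formulation argmin_x 0.5*||y - x||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2; the paper's filter additionally attenuates band-limited drift, which this toy omits.

```python
import numpy as np

def elastic_net_spike_filter(y, lam1, lam2):
    """Closed-form elastic net solution for the identity design:
    soft-threshold by lam1 (kills small, noise-like samples), then
    shrink by 1/(1 + lam2). Surviving samples carry the spikes."""
    shrunk = np.maximum(np.abs(y) - lam1, 0.0)
    return np.sign(y) * shrunk / (1.0 + lam2)

# Toy signal: two spikes on top of small independent noise.
rng = np.random.default_rng(0)
y = 0.05 * rng.standard_normal(200)
y[50], y[120] = 3.0, -2.5
out = elastic_net_spike_filter(y, lam1=0.5, lam2=0.1)
```

Sweeping `lam1` over a grid and reusing the closed form at each value gives a crude regularization path, which is the kind of computation a path-following algorithm accelerates.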
Abstract:
This book constitutes the refereed proceedings of the 14th International Conference on Parallel Problem Solving from Nature, PPSN 2016, held in Edinburgh, UK, in September 2016. The total of 93 revised full papers were carefully reviewed and selected from 224 submissions. The meeting began with four workshops, which offered an ideal opportunity to explore specific topics: intelligent transportation, landscape-aware heuristic search, natural computing in scheduling and timetabling, and advances in multi-modal optimization. PPSN XIV also included sixteen free tutorials covering new aspects of the field: gray box optimization in theory; theory of evolutionary computation; graph-based and cartesian genetic programming; theory of parallel evolutionary algorithms; promoting diversity in evolutionary optimization: why and how; evolutionary multi-objective optimization; intelligent systems for smart cities; advances on multi-modal optimization; evolutionary computation in cryptography; evolutionary robotics - a practical guide to experiment with real hardware; evolutionary algorithms and hyper-heuristics; a bridge between optimization over manifolds and evolutionary computation; implementing evolutionary algorithms in the cloud; the attainment function approach to performance evaluation in EMO; runtime analysis of evolutionary algorithms: basic introduction; meta-model assisted (evolutionary) optimization. The papers are organized in topical sections on adaption, self-adaption and parameter tuning; differential evolution and swarm intelligence; dynamic, uncertain and constrained environments; genetic programming; multi-objective, many-objective and multi-level optimization; parallel algorithms and hardware issues; real-world applications and modeling; theory; diversity and landscape analysis.
Abstract:
Dynamic Optimization Problems (DOPs) have been widely studied using Evolutionary Algorithms (EAs). Yet, a clear and rigorous definition of DOPs is lacking in the Evolutionary Dynamic Optimization (EDO) community. In this paper, we propose a unified definition of DOPs based on the idea of multiple decision making discussed in the Reinforcement Learning (RL) community. We draw a connection between EDO and RL by arguing that both of them are studying DOPs according to our definition. We point out that existing EDO and RL research has mainly focused on certain types of DOPs. A conceptualized benchmark problem, aimed at the systematic study of various DOPs, is then developed. Some interesting experimental studies on the benchmark reveal that EDO and RL methods are specialized in certain types of DOPs and, more importantly, that new algorithms for DOPs can be developed by combining the strengths of both EDO and RL methods.
Abstract:
Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores. Inappropriate scheduling may cause hot spots, which decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate all kinds of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler that uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find some drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. EAs also improve performance when used on heterogeneous architectures. An efficient Pareto front can be obtained with EAs when multiple objectives must be balanced.
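The Pareto-dominance comparison at the heart of such a multi-objective EA can be illustrated in a few lines. The (peak temperature, makespan) pairs below are hypothetical values for candidate schedules, not data from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (peak temperature in C, makespan in ms) per candidate schedule.
candidates = [(70, 12), (65, 15), (80, 10), (65, 12), (90, 9)]
front = pareto_front(candidates)
```

Unlike a linear weighting of the two objectives, the front retains every trade-off point, so the scheduler can pick among them at runtime.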
Abstract:
The optimization of resource allocation in sparse networks with real variables is studied using methods of statistical physics. Efficient distributed algorithms are devised on the basis of insight gained from the analysis and are examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
Abstract:
Inference and optimization of real-value edge variables in sparse graphs are studied using the Bethe approximation and replica method of statistical physics. Equilibrium states of general energy functions involving a large set of real edge variables that interact at the network nodes are obtained in various cases. When applied to the representative problem of network resource allocation, efficient distributed algorithms are also devised. Scaling properties with respect to the network connectivity and the resource availability are found, and links to probabilistic Bayesian approximation methods are established. Different cost measures are considered and algorithmic solutions in the various cases are devised and examined numerically. Simulation results are in full agreement with the theory. © 2007 The American Physical Society.
Abstract:
When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine these to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems of 97 assets. Differential evolution searches for optimal solutions by iteratively mutating, recombining and selecting among randomly initialized candidate solutions. We show that this search technique makes the large-scale problem computationally feasible and that the solutions retrieved are stable. The study also gives further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved from traditional methods.
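The classic DE/rand/1/bin variant underlying such a search can be sketched compactly. This is a generic illustration under stated assumptions: a toy sphere objective stands in for the FSO portfolio objective (which depends on investor utility functions and return data), and all parameter values are conventional defaults rather than the paper's settings.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=100, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of two
    random members, binomial crossover with the parent, greedy selection."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

# Toy stand-in objective: sphere function over 5 "portfolio weights".
best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                        [(-1.0, 1.0)] * 5)
```

The difference-vector mutation is what lets the population adapt its own step sizes, which is one reason DE scales to problems with many assets.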
Abstract:
Protein crystallization has gained a new strategic and commercial relevance in the postgenomic era due to its pivotal role in structural genomics. Producing high quality crystals has always been a bottleneck to efficient structure determination, and this problem is becoming increasingly acute. This is especially true for challenging, therapeutically important proteins that typically do not form suitable crystals. The OptiCryst consortium has focused on relieving this bottleneck by making a concerted effort to improve the crystallization techniques usually employed, designing new crystallization tools, and applying such developments to the optimization of target protein crystals. In particular, the focus has been on the novel application of dual polarization interferometry (DPI) to detect suitable nucleation; the application of in situ dynamic light scattering (DLS) to monitor and analyze the process of crystallization; the use of UV-fluorescence to differentiate protein crystals from salt; the design of novel nucleants and seeding technologies; and the development of kits for capillary counterdiffusion and crystal growth in gels. The consortium collectively handled 60 new target proteins that had not been crystallized previously. From these, we generated 39 crystals with improved diffraction properties. Fourteen of these 39 were only obtainable using OptiCryst methods. For the remaining 25, OptiCryst methods were used in combination with standard crystallization techniques. Eighteen structures have already been solved (30% success rate), with several more in the pipeline.
Abstract:
Many automated negotiation models have been developed to resolve conflicts in distributed computational systems. However, the problem of finding win-win outcomes in multiattribute negotiation has not been tackled well. To address this issue, based on an evolutionary method of multiobjective optimization, this paper presents a negotiation model that can find win-win solutions over multiple attributes without requiring negotiating agents to reveal their private utility functions to their opponents or a third-party mediator. Moreover, we equip our agents with a general type of utility function over interdependent attributes, which captures human intuitions well. In addition, we develop a novel time-dependent concession strategy model, which can help both sides reach a final agreement among a set of win-win ones. Finally, extensive experiments confirm that our negotiation model outperforms existing models developed recently. The experiments also show that our model is stable and efficient in finding fair win-win outcomes, a problem seldom solved by existing models. © 2012 Wiley Periodicals, Inc.
Abstract:
We present a parallel genetic algorithm for finding matrix multiplication algorithms. For 3 x 3 matrices our genetic algorithm successfully discovered algorithms requiring 23 multiplications, which are equivalent to the currently best known human-developed algorithms. We also studied the cases with fewer multiplications and evaluated the suitability of the methods discovered. Although our evolutionary method did not reach the theoretical lower bound, it led to an approximate solution for 22 multiplications.
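Bilinear matrix-multiplication schemes of the kind such a genetic algorithm searches for can be checked numerically against the naive product. As an analogue of the 23-multiplication 3 x 3 schemes (e.g. Laderman's), here is Strassen's well-known 7-multiplication scheme for 2 x 2 matrices, which has the same bilinear form:

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's 2x2 scheme: 7 multiplications instead of the naive 8.
    Each m_k is one product of linear combinations of entries of A and B;
    the outputs are linear combinations of the m_k."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.allclose(strassen_2x2(A, B), A @ B)
```

A fitness function for the evolutionary search can be built the same way: score a candidate scheme by how far its output deviates from the true product on random matrices.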
Abstract:
In the last few years, significant advances have been made in understanding how a yeast cell responds to the stress of producing a recombinant protein, and how this information can be used to engineer improved host strains. The molecular biology of the expression vector, through the choice of promoter, tag and codon optimization of the target gene, is also a key determinant of a high-yielding protein production experiment. Recombinant Protein Production in Yeast: Methods and Protocols examines the process of preparation of expression vectors, transformation to generate high-yielding clones, optimization of experimental conditions to maximize yields, scale-up to bioreactor formats and disruption of yeast cells to enable the isolation of the recombinant protein prior to purification. Written in the highly successful Methods in Molecular Biology™ series format, chapters include introductions to their respective topics, lists of the necessary materials and reagents, step-by-step, readily reproducible laboratory protocols, and key tips on troubleshooting and avoiding known pitfalls.
Abstract:
Purpose: To optimize anterior eye fluorescein viewing and image capture. Design: Prospective experimental investigation. Methods: The spectral radiance of the blue illumination of ten different models of slit-lamp and the spectral transmission of three barrier filters were measured. Optimal clinical instillation of fluorescein was evaluated by comparing four different methods of instilling fluorescein in 10 subjects. Two methods used a floret, and two used minims of different concentration. The resulting fluorescence was evaluated for quenching effects and efficiency over time. Results: Spectral radiance of the blue illumination typically had an average peak at 460 nm. Comparison between three slit-lamps of the same model showed a similar spectral radiance distribution. Of the slit-lamps examined, 8.3% to 50.6% of the illumination output was optimized for >80% fluorescein excitation, and 1.2% to 23.5% of the illumination overlapped with that emitted by the fluorophore. The barrier filters had an average cut-off at 510 to 520 nm. Quenching was observed for all methods of fluorescein instillation. The moistened floret and the 1% minim reached a useful level of fluorescence in on average ∼20 seconds (∼2.5× faster than the saturated floret and 2% minim), and this lasted for ∼160 seconds. Conclusions: Most slit-lamps' blue light and yellow barrier filters are not optimal for fluorescein viewing and capture. Instillation of fluorescein using a moistened floret or 1% minim seems most clinically appropriate, as lower quantities and concentrations of fluorescein improve the efficiency of clinical examination. © 2006 Elsevier Inc. All rights reserved.
Abstract:
The inference and optimization in sparse graphs with real variables is studied using methods of statistical mechanics. Efficient distributed algorithms for the resource allocation problem are devised. Numerical simulations show excellent performance and full agreement with the theoretical results. © Springer-Verlag Berlin Heidelberg 2006.
Abstract:
We present various approaches to the optimization of optical fiber lines and discuss the ranges of validity of such methods. An effective scheme for upgrading existing transmission lines using dispersion management with optimization of the pre- and post-compensating fiber is examined. The theory and numerical methods are illustrated in application to the upgrade of a specific installed Deutsche Telekom fiber line.