26 results for optimization algorithm
in Aston University Research Archive
Abstract:
This paper presents a surrogate-model-based optimization of a doubly-fed induction generator (DFIG) machine winding design for maximizing power yield. Based on site-specific wind profile data and the machine's previous operational performance, the DFIG's stator and rotor windings are optimized for rewinding purposes so that maximum efficiency coincides with the actual operating conditions. Particle swarm optimization-based surrogate optimization techniques are used in conjunction with the finite element method to optimize the machine design, making use of the limited information available on the site-specific wind profile and generator operating conditions. A response surface method is developed within the surrogate model to formulate the design objectives and constraints. In addition, the machine tests and efficiency calculations follow IEEE Standard 112-B. Numerical and experimental results validate the effectiveness of the proposed techniques.
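The loop described here, an expensive evaluator approximated by a cheap response surface that PSO then searches, can be sketched compactly. The following Python sketch is illustrative only: a quadratic response surface is fitted to a placeholder objective standing in for the finite element evaluations, and a standard PSO searches the surrogate; every name and constant is an assumption, not the paper's code.

    # Illustrative sketch of surrogate-assisted PSO (not the paper's code).
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_eval(x):                      # placeholder for an FEM run
        return -((x[0] - 0.3)**2 + (x[1] + 0.2)**2)

    # response surface: fit a quadratic model from a few sampled designs
    X = rng.uniform(-1, 1, size=(30, 2))
    y = np.array([expensive_eval(x) for x in X])
    feats = lambda x: np.array([1, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])
    A = np.array([feats(x) for x in X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    surrogate = lambda x: feats(x) @ coef       # cheap model of the objective

    # standard PSO searching the surrogate
    n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
    pos = rng.uniform(-1, 1, (n, dim)); vel = np.zeros((n, dim))
    pbest = pos.copy(); pval = np.array([surrogate(p) for p in pos])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
        pos = np.clip(pos + vel, -1, 1)
        val = np.array([surrogate(p) for p in pos])
        better = val > pval
        pbest[better], pval[better] = pos[better], val[better]
        gbest = pbest[pval.argmax()].copy()
    print("surrogate optimum near", gbest)

In a fuller loop the surrogate would be refitted periodically with new expensive evaluations near the incumbent optimum.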
Abstract:
Measurement and variation control of geometrical Key Characteristics (KCs), such as the flatness and gap of joint faces and the coaxiality of cabin sections, is a crucial issue in large-component assembly in the aerospace industry. Aiming to control geometrical KCs and to attain the best-fit posture, an optimization algorithm based on KCs for large-component assembly is proposed. This approach regards posture best fit, a key activity in Measurement Aided Assembly (MAA), as a two-phase optimization problem. In the first phase, the global measurement coordinate systems of the digital model and the shop floor are unified with minimum error based on singular value decomposition, and the current posture of the components being assembled is solved optimally in terms of the minimum variation of all reference points. In the second phase, the best posture of the movable component is determined by minimizing the variation of multiple KCs under the constraint that every KC conforms to its product specification. Optimization models and process procedures for these two phases, based on Particle Swarm Optimization (PSO), are proposed. In each model, every posture to be calculated is modeled as a six-dimensional particle (three translation and three rotation parameters). Finally, an example in which two cabin sections of a satellite mainframe structure are assembled is used to verify the effectiveness of the proposed approach, models and algorithms. The experimental results show that the approach is promising and provides a foundation for further study and application. © 2013 The Authors.
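The first phase, unifying the measurement and model coordinate systems by singular value decomposition, is a least-squares rigid registration. A minimal sketch under that reading, with synthetic reference points and a Kabsch-style fit (our construction, not the paper's implementation):

    import numpy as np

    def best_fit_transform(P, Q):
        """Rotation R and translation t minimizing sum ||R @ P_i + t - Q_i||^2."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, cQ - R @ cP

    rng = np.random.default_rng(1)
    model_pts = rng.random((6, 3))               # nominal reference points
    angle = 0.2                                  # synthetic shop-floor posture
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
    measured = model_pts @ Rz.T + np.array([5.0, -2.0, 0.3])
    R, t = best_fit_transform(measured, model_pts)
    residual = np.abs((measured @ R.T + t) - model_pts).max()
    print("max residual after unification:", residual)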
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete; we also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
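The abstract does not spell out the dispersive update rule, so the following is only a hedged sketch: canonical PSO augmented with a crude dispersion step that re-scatters particles crowding the global best, illustrating the premature-convergence countermeasure in spirit rather than the thesis's actual variant.

    import numpy as np

    rng = np.random.default_rng(2)
    f = lambda x: (x**2).sum()                  # sphere function, to minimize

    n, dim, w, c1, c2, radius = 30, 5, 0.7, 1.5, 1.5, 1e-2
    pos = rng.uniform(-5, 5, (n, dim)); vel = np.zeros((n, dim))
    pbest, pval = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[pval.argmin()].copy()
    for _ in range(200):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(g - pos)
        pos += vel
        # dispersion: particles too close to the swarm best are re-scattered
        crowded = np.linalg.norm(pos - g, axis=1) < radius
        pos[crowded] = rng.uniform(-5, 5, (crowded.sum(), dim))
        val = np.array([f(p) for p in pos])
        better = val < pval
        pbest[better], pval[better] = pos[better], val[better]
        g = pbest[pval.argmin()].copy()
    print("best value found:", pval.min())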
Abstract:
MOTIVATION: There is much interest in reducing the complexity inherent in the representation of the 20 standard amino acids within bioinformatics algorithms by developing a so-called reduced alphabet. Although there is no universally applicable residue grouping, there are numerous physicochemical criteria upon which one can base groupings. Local descriptors are a form of alignment-free analysis whose efficiency depends upon the correct selection of amino acid groupings. RESULTS: Within the context of G-protein coupled receptor (GPCR) classification, an optimization algorithm was developed which was able to identify the most efficient grouping when used to generate local descriptors. The algorithm was inspired by the relatively new computational intelligence paradigm of artificial immune systems. A number of amino acid groupings produced by this algorithm were evaluated with respect to their ability to generate local descriptors capable of providing an accurate classification algorithm for GPCRs.
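As a hedged illustration of the immune-inspired search, the sketch below runs clonal selection over groupings of the 20 residues; the fitness function is a stand-in (in the paper it would be the classification accuracy of the resulting local descriptors), and all settings are ours.

    import random

    AMINO = "ACDEFGHIKLMNPQRSTVWY"     # the 20 standard residues
    N_GROUPS = 5

    def random_grouping():
        return [random.randrange(N_GROUPS) for _ in AMINO]

    def fitness(grouping):            # placeholder objective: balanced groups
        sizes = [grouping.count(g) for g in range(N_GROUPS)]
        return -sum((s - len(AMINO) / N_GROUPS)**2 for s in sizes)

    def mutate(grouping, rate):
        return [random.randrange(N_GROUPS) if random.random() < rate else g
                for g in grouping]

    pop = [random_grouping() for _ in range(20)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        clones = []
        for rank, antibody in enumerate(pop[:5]):    # clone the best few
            n_clones = 5 - rank                      # better rank, more clones
            hyper = 0.05 * (rank + 1)                # worse rank, more mutation
            clones += [mutate(antibody, hyper) for _ in range(n_clones)]
        pop = sorted(pop + clones, key=fitness, reverse=True)[:20]
    print("best grouping:", pop[0], "fitness:", fitness(pop[0]))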
Abstract:
Integrated supplier selection and order allocation is an important decision in both designing and operating supply chains. This decision is often influenced by the concerned stakeholders, suppliers, plant operators and customers in different tiers. As firms continue to seek competitive advantage through supply chain design and operations, they aim to create optimized supply chains. This calls for, on the one hand, consideration of multiple conflicting criteria and, on the other, consideration of uncertainties in demand and supply. Although there are studies on supplier selection that separately cover stochastic mathematical models, multiple-criteria decision-making techniques and multiple stakeholder requirements, to the authors' knowledge there is no work that integrates these three aspects in a common framework. This paper proposes an integrated method for dealing with such problems using a combined Analytic Hierarchy Process-Quality Function Deployment (AHP-QFD) and chance-constrained optimization algorithm approach that selects appropriate suppliers and allocates orders optimally between them. The effectiveness of the proposed decision support system has been demonstrated through application and validation in the bioenergy industry.
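Two ingredients of the proposed method can be sketched in miniature: AHP priority weights from a pairwise comparison matrix, and a chance constraint on normally distributed demand reduced to its deterministic equivalent. The data, the 95% service level and the weight-proportional split below are illustrative assumptions, not the paper's model.

    import numpy as np

    # AHP: principal eigenvector of a pairwise comparison matrix
    C = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    w = np.ones(3)
    for _ in range(50):                      # power iteration
        w = C @ w
        w /= w.sum()
    print("supplier weights:", w.round(3))

    # chance constraint: P(total supply >= demand) >= 0.95 with demand
    # ~ Normal(mu, sigma) becomes: total supply >= mu + 1.645 * sigma
    mu, sigma, z95 = 1000.0, 80.0, 1.645
    required = mu + z95 * sigma
    capacity = np.array([700.0, 500.0, 400.0])
    order = np.minimum(capacity, required * w)   # weight-proportional split
    shortfall = required - order.sum()
    if shortfall > 0:                            # top up from spare capacity
        spare = capacity - order
        order += spare * (shortfall / spare.sum())
    print("orders:", order.round(1), "total:", order.sum().round(1))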
Abstract:
Comprehensive coverage is crucial for communication, supply, and transportation networks, yet it is limited by the requirement of extensive infrastructure and heavy energy consumption. Here, we draw an analogy between spins in an antiferromagnet and outlets in supply networks, and apply techniques from the study of disordered systems to elucidate the effects of balancing coverage and supply costs on network behavior. A readily applicable coverage optimization algorithm is derived. Simulation results show that magnetized and antiferromagnetic domains emerge and coexist to balance the need for coverage and energy saving. The scaling of parameters with system size agrees with the continuum approximation in two dimensions and the tree approximation in random graphs. Due to frustration caused by the competition between coverage and supply cost, a transition between easy and hard computation regimes is observed. We further suggest a local expansion approach that greatly simplifies the message updates and sheds light on simplifications in other problems. © 2014 American Physical Society.
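A toy analogue of the coverage versus supply-cost trade-off (not the paper's message-passing algorithm) can make the energy function concrete: outlets on a ring incur a per-outlet cost, uncovered nodes a penalty, and greedy single-node flips descend the energy.

    import random

    N, COST, PENALTY = 60, 1.0, 3.0
    s = [random.randint(0, 1) for _ in range(N)]   # 1 = node hosts an outlet

    def covered(i, state):                         # self or a ring neighbour
        return state[i] or state[(i - 1) % N] or state[(i + 1) % N]

    def energy(state):
        return (COST * sum(state)
                + PENALTY * sum(not covered(i, state) for i in range(N)))

    for _ in range(20 * N):                        # greedy single-flip descent
        i = random.randrange(N)
        trial = s[:]; trial[i] ^= 1
        if energy(trial) <= energy(s):
            s = trial
    print("outlets:", sum(s), "uncovered:",
          sum(not covered(i, s) for i in range(N)), "energy:", energy(s))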
Abstract:
Integration of the measurement activity into the production process is essential in digital enterprise technology, especially for large-volume product manufacturing in industries such as aerospace, shipbuilding, power generation and automotive. Measurement resource planning is a structured method of selecting and deploying the measurement resources necessary to meet the quality aims of product development. In this research, a new mapping approach for measurement resource planning is proposed. Firstly, quality aims are identified in the form of a number of specifications and engineering requirements of the quality characteristics (QCs) at a specific stage of the product life cycle, and measurement systems are classified according to the attributes of the QCs. Secondly, a matrix mapping approach for measurement resource planning is outlined, together with an optimization algorithm for matching quality aims with measurement systems. Finally, the proposed methodology has been applied in shipbuilding to solve the problem of measurement resource planning, deploying the measurement resources so that all the quality aims are satisfied. © Springer-Verlag Berlin Heidelberg 2010.
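One hedged way to make the matrix mapping concrete is to treat the pairing of quality aims with measurement systems as an assignment problem over a suitability-cost matrix; the numbers and the Hungarian-algorithm formulation below are our stand-ins, as the paper's mapping is richer than a one-to-one assignment.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    aims = ["flatness", "gap", "coaxiality", "profile"]
    systems = ["laser tracker", "CMM", "photogrammetry", "theodolite"]
    # cost[i, j]: cost (or unsuitability) of measuring aim i with system j
    cost = np.array([[2, 1, 4, 5],
                     [3, 1, 5, 4],
                     [1, 3, 2, 4],
                     [4, 2, 1, 5]], dtype=float)
    rows, cols = linear_sum_assignment(cost)
    for i, j in zip(rows, cols):
        print(f"{aims[i]:>11} -> {systems[j]}  (cost {cost[i, j]:.0f})")
    print("total cost:", cost[rows, cols].sum())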
Abstract:
Many practical routing algorithms are heuristic, ad hoc and centralized, rendering generic and optimal path configurations difficult to obtain. Here we study a scenario whereby selected nodes in a given network communicate with fixed routers, and employ statistical physics methods to obtain optimal routing solutions subject to a generic cost. A distributive message-passing algorithm capable of optimizing the path configuration in real instances is devised based on the analytical derivation, and is greatly simplified by expanding the cost function around the optimized flow. Good algorithmic convergence is observed in most of the parameter regimes. By applying the algorithm, we study and compare the pros and cons of balanced traffic configurations against consolidated traffic, which has important implications for practical communication and transportation networks. Interesting macroscopic phenomena are observed in the optimized states as an interplay between the communication density and the cost functions used. © 2013 IEEE.
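A far simpler distributed relative of the message-passing idea is sketched below: distance-vector updates in which each node repeatedly passes its current cost-to-router to its neighbours, converging to least-cost paths. The paper's algorithm handles interacting flows under a generic cost; this shows only the distributive update pattern.

    import math

    edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 4.0, (2, 3): 1.0, (1, 3): 5.0}
    graph = {}
    for (a, b), wgt in edges.items():
        graph.setdefault(a, []).append((b, wgt))
        graph.setdefault(b, []).append((a, wgt))

    router = 3
    cost = {v: (0.0 if v == router else math.inf) for v in graph}
    for _ in range(len(graph)):              # synchronous message rounds
        cost = {v: min([cost[v]] + [wgt + cost[u] for u, wgt in graph[v]])
                for v in graph}
    print("cost to router:", cost)           # e.g. node 0 -> 4.0 via 1, 2, 3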
Abstract:
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The algorithm resembles back-propagation in that an error function is minimized using a gradient-based method, but the optimization is carried out in the hidden part of state space, either instead of, or in addition to, weight space. Computational results are presented for some simple dynamical training problems, one of which requires response to a signal 100 time steps in the past.
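A scalar illustration of optimizing in state space (our reconstruction of the idea, not the paper's algorithm): the hidden trajectory is treated as free variables alongside the weight, and gradient descent minimizes the output error plus a consistency penalty tying each state to the network dynamics.

    import numpy as np

    T, target, lam, lr = 10, 0.8, 1.0, 0.05
    rng = np.random.default_rng(3)
    s = rng.normal(0, 0.1, T + 1)
    s[0] = 0.5                                   # the input state stays clamped
    w = 0.1

    for _ in range(2000):
        pred = np.tanh(w * s[:-1])               # one-step network predictions
        r = s[1:] - pred                         # state-consistency residuals
        dtanh = 1 - pred**2
        gs = np.zeros(T + 1)
        gs[1:] += 2 * lam * r                    # residual gradient wrt s_{t+1}
        gs[:-1] += -2 * lam * r * dtanh * w      # residual gradient wrt s_t
        gs[T] += 2 * (s[T] - target)             # output error at final step
        gw = np.sum(-2 * lam * r * dtanh * s[:-1])
        s[1:] -= lr * gs[1:]                     # descend in state space...
        w -= lr * gw                             # ...and in weight space
    x = s[0]
    for _ in range(T):                           # run the trained net forward
        x = np.tanh(w * x)
    print("forward-run output:", round(float(x), 3), "target:", target)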
Abstract:
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The method resembles back-propagation in that it is a least-squares, gradient-based optimization method, but the optimization is carried out in the hidden part of state space instead of weight space. A straightforward adaptation of this method to feedforward networks offers an alternative to training by conventional back-propagation. Computational results are presented for simple dynamical training problems, with varied success. The failures appear to arise when the method converges to a chaotic attractor. A patch-up for this problem is proposed; it involves a technique for implementing inequality constraints which may be of interest in its own right.
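The paper's exact construction for inequality constraints is not given in the abstract; a common least-squares-friendly trick in the same spirit is the slack-variable substitution sketched below, which enforces x <= 1 by construction and leaves an unconstrained problem for gradient descent.

    # Minimize (x - 2)^2 subject to x <= 1 by writing x = 1 - z**2,
    # so the constraint holds for any value of the slack variable z.
    z, lr = 2.0, 0.01
    for _ in range(3000):
        x = 1.0 - z**2                     # feasible by construction
        grad_z = 2 * (x - 2.0) * (-2 * z)  # chain rule through x = 1 - z^2
        z -= lr * grad_z
    print("constrained minimum of (x-2)^2 with x<=1 at x =", 1.0 - z**2)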
Abstract:
There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, training such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
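A miniature model in this family (our sketch, with illustrative constants and synthetic data) maps a one-dimensional latent grid through fixed RBF basis functions into data space, with EM re-estimating the linear mapping and the noise precision:

    import numpy as np

    rng = np.random.default_rng(4)
    # synthetic data on a noisy curve in 2-D
    t = rng.uniform(-1, 1, 200)
    X = np.c_[t, t**2] + rng.normal(0, 0.05, (200, 2))

    K, M = 20, 5                                 # latent points, basis functions
    z = np.linspace(-1, 1, K)
    mu = np.linspace(-1, 1, M)
    Phi = np.exp(-(z[:, None] - mu[None, :])**2 / (2 * 0.3**2))   # K x M
    W = rng.normal(0, 0.1, (M, 2))
    beta = 10.0                                  # noise precision

    for _ in range(50):                          # EM iterations
        Y = Phi @ W                              # images of the latent points
        d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)         # N x K
        logR = -0.5 * beta * d2
        logR -= logR.max(axis=1, keepdims=True)  # stabilise the softmax
        R = np.exp(logR)
        R /= R.sum(axis=1, keepdims=True)        # E-step responsibilities
        G = np.diag(R.sum(axis=0))
        W = np.linalg.solve(Phi.T @ G @ Phi + 1e-3 * np.eye(M),
                            Phi.T @ R.T @ X)     # M-step: mapping weights
        beta = X.size / (R * d2).sum()           # M-step: noise precision
    print("recovered noise standard deviation:", (1 / beta) ** 0.5)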
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To show that the approach is of more general interest than the test cases they consider, the technique is applied in this paper to the subset sum problem, a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single-spin-flip dynamics. It is a problem which exhibits interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
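The quantities the theory tracks are easy to compute for a concrete population. The sketch below measures the first three energy cumulants and the mean correlation for a random population on a small subset sum instance; the spin encoding and instance data are our illustrative choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    n_items, pop_size = 12, 50
    a = rng.random(n_items)                      # subset sum weights
    pop = rng.choice([-1, 1], size=(pop_size, n_items))
    E = np.abs(pop @ a)                          # energy of each member

    k1 = E.mean()                                # first cumulant (mean)
    k2 = E.var()                                 # second cumulant (variance)
    k3 = ((E - k1) ** 3).mean()                  # third cumulant
    overlap = (pop @ pop.T) / n_items            # pairwise spin overlaps
    iu = np.triu_indices(pop_size, k=1)
    mean_corr = overlap[iu].mean()               # mean correlation in population
    print(f"k1={k1:.3f} k2={k2:.3f} k3={k3:.3f} corr={mean_corr:.3f}")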
Abstract:
Physical distribution plays an important role in contemporary logistics management. Both customer satisfaction and company competitiveness can be enhanced if the distribution problem is solved optimally. The multi-depot vehicle routing problem (MDVRP) is a practical logistics distribution problem, which consists of three critical issues: customer assignment, customer routing, and vehicle sequencing. According to the literature, existing solution approaches for the MDVRP are not satisfactory because some unrealistic assumptions were made on the first sub-problem of the MDVRP, i.e. the customer assignment problem. To refine the approaches, the focus of this paper is confined to this problem only. This paper formulates the customer assignment problem as a minimax-type integer linear programming model with the objective of minimizing the cycle time of the depots, where setup times are explicitly considered. Since the model is proven to be NP-complete, a genetic algorithm is developed for solving the problem. The efficiency and effectiveness of the genetic algorithm are illustrated by a numerical example.
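A hedged GA sketch of the customer assignment sub-problem, minimizing the maximum depot cycle time with explicit per-depot setup times; the instance data and GA settings are illustrative, not the paper's.

    import random

    random.seed(6)
    n_cust, n_dep = 15, 3
    service = [random.uniform(1, 5) for _ in range(n_cust)]
    setup = [2.0, 3.0, 1.5]

    def cycle_time(assign):                    # minimax objective
        load = [setup[d] for d in range(n_dep)]
        for c, d in enumerate(assign):
            load[d] += service[c]
        return max(load)

    def crossover(a, b):                       # one-point crossover
        cut = random.randrange(1, n_cust)
        return a[:cut] + b[cut:]

    def mutate(a):                             # reassign one customer
        a = a[:]
        a[random.randrange(n_cust)] = random.randrange(n_dep)
        return a

    pop = [[random.randrange(n_dep) for _ in range(n_cust)] for _ in range(40)]
    for _ in range(200):
        pop.sort(key=cycle_time)
        elite = pop[:10]
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(30)]
    print("best max cycle time:", round(cycle_time(min(pop, key=cycle_time)), 2))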
Abstract:
We investigate a digital back-propagation simplification method to enable computationally efficient digital nonlinearity compensation for coherently-detected 112 Gb/s polarization-multiplexed quadrature phase shift keying transmission over a 1,600 km link (20 × 80 km) with no inline compensation. Through numerical simulation, we report up to an 80% reduction in the number of back-propagation steps required to perform nonlinear compensation, in comparison to the standard back-propagation algorithm. This method takes into account the correlation between adjacent symbols at a given instant using a weighted-average approach, together with optimization of the position of the nonlinear compensator stage, to enable practical digital back-propagation.
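The weighted-average idea can be sketched in isolation (a hedged toy, not the paper's simulation setup): each back-propagation stage rotates samples by a nonlinear phase computed from a weighted average of the instantaneous power over adjacent samples, rather than each sample's own power, which is what allows fewer, larger steps.

    import numpy as np

    rng = np.random.default_rng(7)
    A = (rng.normal(size=256) + 1j * rng.normal(size=256)) / np.sqrt(2)

    n_steps = 4
    phi_per_step = 0.05                       # illustrative nonlinear coefficient
    weights = np.array([0.25, 0.5, 0.25])     # correlation window over neighbours

    def nonlinear_step(field, phi):
        P = np.abs(field) ** 2
        P_avg = np.convolve(P, weights, mode="same")   # weighted-average power
        return field * np.exp(-1j * phi * P_avg)       # inverse nonlinear phase

    for _ in range(n_steps):
        # a full DBP stage would interleave a linear (dispersion) step here
        A = nonlinear_step(A, phi_per_step)
    print("mean power (unchanged by phase-only steps):",
          round(float(np.mean(np.abs(A) ** 2)), 3))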
Abstract:
We numerically investigate the combination of full-field detection and a feed-forward equalizer (FFE) for adaptive chromatic dispersion compensation up to 2160 km in a 10 Gbit/s on-off keyed optical transmission system. The technique, with respect to earlier reports, incorporates several important implementation modules, including an algorithm for adaptive equalization of the gain imbalance between the two receiver chains, compensation of the phase misalignment of the asymmetric Mach-Zehnder interferometer, and a simplified implementation of the field calculation. We also show that, in addition to enabling fast adaptation and simplified field calculation, the full-field FFE exhibits enhanced tolerance to sampling phase misalignment and reduced sampling rate when compared to the full-field implementation using a dispersive transmission line.
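The adaptive core of an FFE can be sketched independently of the full-field front end that the paper's extra modules serve: below, an LMS-adapted feed-forward equalizer removes linear intersymbol interference from a toy channel. All parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    symbols = rng.choice([-1.0, 1.0], size=4000)      # OOK-like binary stream
    channel = np.array([0.9, 0.4, 0.2])               # toy dispersive ISI
    received = np.convolve(symbols, channel, mode="full")[:len(symbols)]

    n_taps, mu = 9, 0.01
    w = np.zeros(n_taps); w[n_taps // 2] = 1.0        # centre-spike init
    buf = np.zeros(n_taps)
    errs = []
    for k, x in enumerate(received):
        buf = np.roll(buf, 1); buf[0] = x             # tapped delay line
        y = w @ buf
        d = symbols[k - n_taps // 2] if k >= n_taps // 2 else 0.0  # training
        e = d - y
        w += mu * e * buf                             # LMS tap update
        errs.append(e * e)
    print("MSE first 500:", round(float(np.mean(errs[:500])), 3),
          " last 500:", round(float(np.mean(errs[-500:])), 3))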