872 results for Particle swarm optimization algorithm PSO
Abstract:
Biogeography is the science that studies the geographical distribution and migration of species in an ecosystem. Biogeography-based optimization (BBO) is a recently developed global optimization algorithm that generalizes biogeography into an evolutionary algorithm, and it has shown its ability to solve complex optimization problems. BBO employs a migration operator to share information between problem solutions: the solutions are identified as habitats, and the sharing of features between them is called migration. In this paper, a multiobjective BBO combined with a predator-prey approach (PPBBO) is proposed and validated in the constrained design of a brushless DC wheel motor. The results demonstrate that the proposed PPBBO approach converged to promising solutions in terms of quality and dominance when compared with the classical BBO in a multiobjective version.
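As a rough illustration of the migration operator described above, the following Python sketch shares features from fitter habitats with weaker ones. The linear immigration/emigration rates and roulette-wheel selection are standard BBO conventions, not details taken from this paper.

```python
import random

def bbo_migration(population, fitness, rng=None):
    """One BBO migration pass: features flow from fit habitats (high
    emigration rate) into weak habitats (high immigration rate)."""
    rng = rng or random.Random(0)
    n = len(population)
    # Rank habitats by fitness: rank 0 = best.
    order = sorted(range(n), key=lambda i: fitness[i], reverse=True)
    rank = {h: r for r, h in enumerate(order)}
    new_pop = [list(h) for h in population]
    for i in range(n):
        lam = (rank[i] + 1) / n                              # immigration rate (linear model)
        mu = [1.0 - (rank[j] + 1) / n for j in range(n)]     # emigration rates
        for d in range(len(population[i])):
            if rng.random() < lam:
                # Roulette-select an emigrating habitat weighted by mu
                j = rng.choices(range(n), weights=mu, k=1)[0]
                new_pop[i][d] = population[j][d]
    return new_pop
```

Weak habitats thus receive many immigrated features while the best habitat mostly keeps its own, which is the information-sharing mechanism the abstract refers to.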
Abstract:
Small-scale fluid flow systems have been studied for various applications, such as chemical reagent dosing and cooling devices for compact electronic components. This work presents the complete development cycle of a heat sink optimized with the Topology Optimization Method (TOM) for best performance, including minimization of the pressure drop in the fluid flow and maximization of the heat dissipation effects, aiming at small-scale applications. The TOM is applied to a design domain to obtain an optimized channel topology, according to a given multi-objective function that combines pressure drop minimization and heat transfer maximization. The Stokes flow hypothesis is adopted. Moreover, both conduction and forced convection effects are included in the steady-state heat transfer model. The topology optimization procedure combines the Finite Element Method (to carry out the physical analysis) with Sequential Linear Programming (as the optimization algorithm). Two-dimensional topology optimization results of channel layouts obtained for a heat sink design are presented as examples to illustrate the design methodology. 3D computational simulations and prototype manufacturing have been carried out to validate the proposed design methodology.
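The combined objective mentioned above can be written as a simple weighted scalarization. The weight and the normalizing reference values in this sketch are hypothetical, purely to illustrate the trade-off; they are not the paper's formulation.

```python
def heat_sink_objective(pressure_drop, heat_dissipated, w=0.5,
                        p_ref=1.0, q_ref=1.0):
    """Scalarized multi-objective function for the heat-sink design:
    minimize pressure drop while maximizing heat dissipation.
    p_ref and q_ref normalize the two terms to comparable magnitudes;
    lower values correspond to better designs."""
    return w * (pressure_drop / p_ref) - (1.0 - w) * (heat_dissipated / q_ref)
```

In the actual TOM procedure an objective of this kind is evaluated through the finite element model, and its linearization is handed to the sequential linear programming step.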
Abstract:
Limit equilibrium is a common method used to analyze the stability of a slope, and minimization of the factor of safety (FS) or identification of critical slip surfaces is a classical geotechnical problem in the context of limit equilibrium methods for slope stability analyses. A mutative-scale chaos optimization algorithm is employed in this study to locate the noncircular critical slip surface, with Spencer's method being used to compute the factor of safety. Four examples from the literature (one homogeneous slope and three layered slopes) are employed to assess the efficiency and accuracy of this approach. Results indicate that the algorithm is flexible: although it does not generally find the minimum FS, it provides results close to the minimum, with small relative errors with respect to the minimum FS values reported in the literature, improving on other published solutions.
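A minimal sketch of a mutative-scale chaotic search in one dimension, assuming the common logistic-map formulation. The paper's search is over multi-dimensional slip-surface geometries with Spencer's method as the objective; this shows only the core idea of chaotic sampling inside a progressively contracting window.

```python
def chaos_optimize(f, lo, hi, iters=2000, shrink=0.9, stages=10, x0=0.345):
    """Mutative-scale chaotic search (sketch): a logistic map generates
    ergodic samples in [lo, hi]; after each stage the search window
    contracts around the incumbent best (the 'mutative scale')."""
    best_x, best_f = None, float("inf")
    cx = x0                                   # chaotic state in (0, 1)
    for _ in range(stages):
        for _ in range(iters // stages):
            cx = 4.0 * cx * (1.0 - cx)        # logistic map, ergodic for r = 4
            x = lo + cx * (hi - lo)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # Contract the search window around the current best
        half = shrink * (hi - lo) / 2.0
        lo, hi = max(lo, best_x - half), min(hi, best_x + half)
    return best_x, best_f
```

Like the slope-stability search, this provides a near-minimal value rather than a guaranteed global minimum.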
Abstract:
Poster presented at ESCAPE 22, European Symposium on Computer Aided Process Engineering, University College London, UK, 17-20 June 2012.
Abstract:
In this paper we present a study of the computational cost of the GNG3D algorithm for mesh optimization. This algorithm is implemented on the basis of a new neural-network method and consists of two distinct phases: an optimization phase and a reconstruction phase. The optimization phase applies an optimization algorithm based on the Growing Neural Gas model, an unsupervised incremental clustering algorithm. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The computational cost of both phases is calculated, and some examples are shown.
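The core adaptation step of the Growing Neural Gas model underlying the optimization phase can be sketched as follows. The learning rates and the node/edge representation are illustrative assumptions, not the GNG3D implementation studied in the paper.

```python
import math

def gng_adapt(nodes, edges, signal, eps_w=0.05, eps_n=0.006):
    """One adaptation step of the Growing Neural Gas model (sketch):
    find the node nearest to the input signal and drag it, together
    with its graph neighbours, toward the signal.
    nodes: list of coordinate lists; edges: set of frozensets of node
    indices. Returns the winner's index."""
    winner = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], signal))
    # Move the winner strongly toward the signal
    nodes[winner] = [w + eps_w * (s - w) for w, s in zip(nodes[winner], signal)]
    # Move topological neighbours of the winner weakly
    for e in edges:
        if winner in e:
            (n,) = e - {winner}
            nodes[n] = [w + eps_n * (s - w) for w, s in zip(nodes[n], signal)]
    return winner
```

Repeating this step over many input signals (plus periodic node insertion and edge ageing, omitted here) yields the simplified vertex set that the reconstruction phase then triangulates.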
Abstract:
The optimization of chemical processes in which the flowsheet topology is not kept fixed is a challenging discrete-continuous optimization problem. Usually, this task has been performed through equation-based models. That approach presents several problems, such as tedious and complicated estimation of component properties or the handling of huge problems (with thousands of equations and variables). We propose a generalized disjunctive programming (GDP) approach as an alternative to MINLP models coupled with a flowsheet program. The novelty of this approach lies in using a commercial modular process simulator in which the superstructure is drawn directly on the graphical user interface of the simulator. This methodology takes advantage of modular process simulators (specially tailored numerical methods, reliability, and robustness) and of the flexibility of the GDP formulation for the modeling and solution. The proposed optimization tool is successfully applied to the synthesis of a methanol plant where different alternatives are available for the streams, equipment and process conditions.
Abstract:
Many models proposed in recent decades aim to assess the reliability, availability and maintainability (RAM) of safety equipment, often with a focus on assessing the risk level of a technological system or on searching for appropriate design and/or surveillance and maintenance policies that keep an optimum level of RAM of safety systems throughout the plant's operational life. This paper proposes a new approach for RAM modelling that accounts, in an integrated manner, for equipment ageing and for the maintenance and testing effectiveness of equipment consisting of multiple items. This model is then used to perform the simultaneous optimization of testing and maintenance for ageing equipment consisting of multiple items. An example of application is provided, which considers a simplified High Pressure Injection System (HPIS) of a typical Pressurized Water Reactor (PWR). Basically, this system consists of motor-driven pumps (MDP) and motor-operated valves (MOV), where both types of components consist of two items each. These components present different failure causes, modes and behaviours, and they also undergo complex test and maintenance activities depending on the item involved. The results of the application example demonstrate that the optimization algorithm provides the best solutions when the optimization problem is formulated and solved considering full flexibility in the implementation of the testing and maintenance activities that take part in such an integrated RAM model.
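For intuition on the testing trade-off being optimized, a textbook single-component approximation (not the paper's integrated multi-item RAM model) balances undetected standby failures against downtime caused by the tests themselves:

```python
def mean_unavailability(lam, test_dur, rho, interval):
    """Average unavailability of a periodically tested standby component
    (classic approximation): per-demand failure probability rho, an
    undetected-failure term lam * T / 2, and a test-downtime term tau / T."""
    return rho + 0.5 * lam * interval + test_dur / interval

def best_interval(lam, test_dur, rho, candidates):
    """Pick the candidate test interval minimizing mean unavailability."""
    return min(candidates, key=lambda T: mean_unavailability(lam, test_dur, rho, T))
```

Testing too rarely lets failures sit undetected; testing too often keeps the component out of service, so an intermediate interval (analytically sqrt(2 * tau / lam) in this simple model) is optimal. The paper optimizes a far richer version of this trade-off across multiple items with ageing.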
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite-size effects, the influence of population size on the performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
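A minimal particle swarm optimiser of the kind used here to tune the task-allocation algorithms' parameters might look as follows. The inertia and acceleration coefficients are conventional textbook values, not those used in the paper.

```python
import random

def pso(f, dim, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser (sketch). Each particle tracks
    its personal best P; the swarm tracks a global best gbest; velocities
    blend inertia with attraction toward both bests."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pf = [f(x) for x in X]                     # personal best values
    g = min(range(n_particles), key=lambda i: pf[i])
    gbest, gf = P[g][:], pf[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    gbest, gf = X[i][:], fx
    return gbest, gf
```

In the paper's setting, `f` would be (the negative of) the efficiency of a task-allocation algorithm run with the candidate parameter vector in a representative environment.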
Abstract:
2000 Mathematics Subject Classification: 62J05, 62G35
Oxidative photocatalytic degradation of phenol using char obtained from the pyrolysis of different biomasses
Abstract:
Modern industrial progress has been contaminating water with phenolic compounds. These are toxic and carcinogenic substances, and it is essential to reduce their concentration in water to the tolerable level determined by CONAMA in order to protect living organisms. In this context, this work focuses on the treatment and characterization of catalysts derived from bio-char, a by-product of biomass pyrolysis (hazelnut shells and wood dust), as well as their evaluation in the photocatalytic degradation of phenol. Assays were carried out in a slurry bed reactor, which enables instantaneous measurements of temperature, pH and dissolved oxygen. The experiments were performed under the following operating conditions: temperature of 50 °C, oxygen flow of 410 mL min^-1, reagent solution volume of 3.2 L, a 400 W UV lamp, 1 atm pressure, and 2-hour runs. The parameters evaluated were the pH (3.0, 6.9 and 10.7), the initial concentration of commercial phenol (250, 500 and 1000 ppm), the catalyst concentration (0, 1, 2 and 3 g L^-1), and the nature of the catalyst (hazelnut activated carbon washed with dichloromethane, CAADCM, and wood-dust activated carbon washed with dichloromethane, CMADCM). The XRF, XRD and BET results confirmed the presence of iron and potassium in satisfactory amounts in the CAADCM catalyst and in a reduced amount in the CMADCM catalyst, as well as the increase in surface area of the materials after chemical and physical activation. The phenol degradation curves indicate that pH has a significant effect on phenol conversion, with better results at lower pH. The optimum catalyst concentration was found to be 1 g L^-1, and increasing the initial phenol concentration exerts a negative influence on the reaction.
A positive effect of the presence of iron and potassium in the catalyst structure was also observed: better conversions were obtained in tests conducted with the CAADCM catalyst than with the CMADCM catalyst under the same conditions. The highest conversion was achieved in the test carried out at acid pH (3.0) with an initial phenol concentration of 250 ppm in the presence of CAADCM at 1 g L^-1. Liquid samples taken every 15 minutes were analyzed by liquid chromatography, identifying and quantifying hydroquinone, p-benzoquinone, catechol and maleic acid. Finally, a reaction mechanism is proposed, postulating that phenol is transformed in the homogeneous phase while the other species react on the catalyst surface. Applying the Langmuir-Hinshelwood model together with a mass balance, a system of differential equations was obtained and solved using the 4th-order Runge-Kutta method coupled with an optimization routine called SWARM (particle swarm) to minimize the least-squares objective function and obtain the kinetic and adsorption parameters. The kinetic rate constants obtained were of order 10^-3 for phenol degradation, 10^-4 to 10^-2 for the formation of the acids, 10^-6 to 10^-9 for the mineralization of the quinones (hydroquinone, p-benzoquinone and catechol), and 10^-3 to 10^-2 for the mineralization of the acids.
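The fitting machinery described above (integrate the kinetic ODEs with 4th-order Runge-Kutta, then minimise a least-squares objective over the rate constants) can be sketched for a single first-order reaction, a toy stand-in for the full Langmuir-Hinshelwood system:

```python
def rk4_last(f, y0, t0, t1, n=100):
    """Integrate dy/dt = f(t, y) with the classic 4th-order Runge-Kutta
    scheme over [t0, t1] in n steps and return y(t1)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

def sse(k, times, data, c0):
    """Least-squares objective for first-order decay dC/dt = -k C:
    sum of squared deviations between the simulated and measured
    concentrations at the sampling times."""
    return sum((rk4_last(lambda t, c: -k * c, c0, 0.0, ti) - ci) ** 2
               for ti, ci in zip(times, data))
```

In the thesis the minimiser wrapped around this objective is the SWARM particle-swarm routine; any minimiser over the rate constants (even a grid search, as in the test below) plays the same role.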
Abstract:
<p>Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine-learning-generated dose objectives.</p><p>Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.</p><p>Results: At a 5% significance level, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage of the PTV was improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared to the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.</p><p>Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.</p>
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years, and the Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by a significant bottleneck: the lengthy optimization runtime. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, yielding significant, highly scalable and nearly linear speedups of up to 6.9 and 14.5 on distributed 8-core and 16-core systems, respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterpart. This is a milestone in realizing DNO. Furthermore, the techniques may be applied to applications based on similar greedy optimization algorithms.
Abstract:
This study investigates topology optimization of energy absorbing structures in which material damage is accounted for in the optimization process. The optimization objective is to design the lightest structures that are able to absorb the required mechanical energy. A structural continuity constraint check is introduced that is able to detect when no feasible load path remains in the finite element model, usually as a result of large scale fracture. This assures that designs do not fail when loaded under the conditions prescribed in the design requirements. This continuity constraint check is automated and requires no intervention from the analyst once the optimization process is initiated. Consequently, the optimization algorithm proceeds towards evolving an energy absorbing structure with the minimum structural mass that is not susceptible to global structural failure. A method is also introduced to determine when the optimization process should halt. The method identifies when the optimization method has plateaued and is no longer likely to provide improved designs if continued for further iterations. This provides the designer with a rational method to determine the necessary time to run the optimization and avoid wasting computational resources on unnecessary iterations. A case study is presented to demonstrate the use of this method.
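The continuity-constraint check can be viewed as a graph search on the element mesh: if no path of intact material connects the loaded region to the supports, every load path has been severed. The regular grid below is an illustrative simplification of the finite element model, not the paper's implementation.

```python
from collections import deque

def load_path_exists(solid, loads, supports):
    """Continuity-constraint check (sketch): True if some 4-connected
    path of intact elements joins a loaded element to a supported one;
    False means large-scale fracture has removed every load path.
    solid: 2D list of booleans (True = intact material);
    loads, supports: sets of (row, col) element indices."""
    rows, cols = len(solid), len(solid[0])
    queue = deque(e for e in loads if solid[e[0]][e[1]])
    seen = set(queue)
    while queue:                              # breadth-first flood fill
        r, c = queue.popleft()
        if (r, c) in supports:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and solid[nr][nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

An optimizer can run a check of this kind after each damage update and reject candidate designs for which it returns False, which is the automated role the constraint plays in the study.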
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables. <br/>For a successful implementation of shape optimization strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimized plays a vital role. Where the goal is to base the optimization on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, preserving the design intent [3]. The main advantage of using the feature-based model is that the optimized model produced can be directly used for downstream applications including manufacturing and process planning.<br/>This paper presents an approach for optimization based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the Parametric Design Velocity is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.<br/>The approach presented here for calculating the design velocities represents an advancement in capability and robustness over that described by Robinson et al. [3].
The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (real-valued) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure involves calculating the geometric movement along a normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be directly linked with adjoint surface sensitivities to extract the gradients to use in a gradient-based optimization algorithm.<br/>A flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.<br/>
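The finite-difference design velocity and its chain-rule link to the adjoint surface sensitivities can be sketched as follows. A 2D point-sampled boundary with one-to-one matched points is assumed for illustration; the actual implementation works on discrete representations of 3D CAD geometry.

```python
def design_velocity(pts, pts_pert, normals, dp):
    """Normal component of boundary movement per unit change dp in one
    CAD parameter, by finite differences between the original and the
    perturbed discrete boundaries (points matched one-to-one)."""
    return [((xp - x) * nx + (yp - y) * ny) / dp
            for (x, y), (xp, yp), (nx, ny) in zip(pts, pts_pert, normals)]

def objective_gradient(surface_sens, velocities, areas):
    """Chain rule dJ/dp = sum_i s_i * V_i * A_i linking the adjoint
    surface sensitivities s_i to the design velocities V_i over the
    boundary patches of area A_i."""
    return sum(s * v * a for s, v, a in zip(surface_sens, velocities, areas))
```

For a unit circle whose radius is the CAD parameter, every boundary point moves radially, so the design velocity is 1 everywhere and, with unit surface sensitivity, dJ/dp equals the circumference.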
Abstract:
This report describes a tool for global optimization that implements the Differential Evolution optimization algorithm as a new Excel add-in. The tool takes a step beyond Excel's Solver add-in, because Solver often returns a local minimum, that is, a minimum that is less than or equal to nearby points, whereas Differential Evolution searches all feasible points for the global minimum. Despite the complex underlying mathematics, the tool is relatively easy to use and can be applied to practical optimization problems, such as establishing pricing and awards in a hotel loyalty program. The report demonstrates an example of how to develop an optimal approach to that problem.
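The classic DE/rand/1/bin scheme behind such a tool can be sketched in a few lines. The population size, differential weight F and crossover rate CR below are conventional defaults, not the add-in's settings.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=150, seed=7):
    """DE/rand/1/bin (sketch): mutate with a scaled difference of two
    random vectors added to a third, apply binomial crossover, and keep
    the trial point whenever it is at least as good."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)        # force at least one mutated gene
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                    lo, hi = bounds[d]
                    trial.append(min(hi, max(lo, v)))   # clamp to bounds
                else:
                    trial.append(pop[i][d])
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Because selection compares each trial against its parent across the whole population, the search keeps exploring the feasible region instead of stalling at the first local minimum, which is the advantage over Solver that the report highlights.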