872 results for Particle swarm optimization algorithm PSO
Abstract:
Studying anthropogenic global change in the world's ecosystems, and its effects on them, is and will remain one of the main challenges of twenty-first-century ecology. Spanish forest ecosystems are already limited by water stress, and this limitation will be aggravated by the effects of climate change, through both a reduction in available water and an increase in evaporative demand. Appropriate forest management can increase the resilience of Mediterranean forest ecosystems to climate change. Ecophysiological process models such as GOTILWA+ are powerful tools for projecting the effects of climate change on forest ecosystems, as well as for evaluating forest management. GOTILWA+ includes a powerful forest-management optimization engine based on the Particle Swarm Optimization (PSO) algorithm, which makes it possible to project optimal management as a function of the environmental variables, both climatic and structural, and of the management objectives. Management that adapts to climate change will be essential to counter its negative impacts on Spanish forests. This article presents three examples of application of the GOTILWA+ model: the first studies the response of Spanish beech forests (Fagus sylvatica L.) to different climate change scenarios; the second evaluates different management itineraries for Aleppo pine (Pinus halepensis Mill.) under different management objectives; and the third applies PSO to a Scots pine (Pinus sylvestris L.) stand to obtain the optimal management of the stand. It is concluded that, although climate change will impose severe constraints on Spanish forest ecosystems, adaptive management will partly mitigate these impacts [...].
Abstract:
An efficient two-level model identification method aiming to maximise a model's generalisation capability is proposed for a large class of linear-in-the-parameters models identified from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters in the elastic net are optimised using a particle swarm optimisation (PSO) algorithm at the upper level by minimising the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. Firstly, an elastic net cost function based on orthogonal decomposition is defined and applied, which facilitates the automatic model structure selection process without the need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE of the resultant ENOFR models can be computed analytically without actually splitting the data set, and the associated computational cost is small thanks to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resorting to a separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
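As a rough illustration of the analytic leave-one-out idea mentioned above, the sketch below computes a LOO mean square error for a ridge-penalised linear-in-the-parameters model through the hat-matrix shortcut, without refitting the model n times. It is a simplification under assumed notation (regressor matrix P, targets y, penalty lam), not the paper's ENOFR derivation, and it omits the L1 part of the elastic net that the authors handle via orthogonal decomposition.

import numpy as np

def loo_mse_ridge(P, y, lam):
    # Analytic leave-one-out MSE for a ridge-penalised linear-in-the-parameters model.
    m = P.shape[1]
    A = P.T @ P + lam * np.eye(m)
    theta = np.linalg.solve(A, P.T @ y)              # penalised least-squares estimate
    resid = y - P @ theta
    h = np.sum((P @ np.linalg.inv(A)) * P, axis=1)   # diagonal of the hat matrix
    loo_resid = resid / (1.0 - h)                    # LOO residuals without refitting
    return float(np.mean(loo_resid ** 2))

A PSO loop at the upper level would then minimise this criterion with respect to the regularisation parameters.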
Abstract:
Applying optimization algorithms to PDE-based models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires computationally expensive simulations, so effective parallel optimization could substantially reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function (RBF) method. It is designed for global optimization of computationally expensive functions with multiple local optima and does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated using all evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. The criteria for selecting the next function evaluation points are the estimated function value and the distance from all known points. Algorithms created for serial computing are not necessarily efficient in parallel, so the Parallel Stochastic RBF method differs from its serial ancestor. The method is applied to two Groundwater Superfund Remediation sites: the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. In this study, the adopted formulation treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes of simulation for a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e. speedup and efficiency. We find that with up to 24 parallel processors, the results of the Parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
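As a loose sketch of the surrogate step described above (assuming a Gaussian RBF and a simple weighted scoring rule; the actual algorithm and its MODFLOW-MT3DMS coupling are considerably more elaborate):

import numpy as np

def fit_gaussian_rbf(X, f, eps=1.0):
    # Interpolating Gaussian RBF surrogate: solve K w = f with K_ij = exp(-eps*||x_i - x_j||^2).
    K = np.exp(-eps * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    return np.linalg.solve(K + 1e-10 * np.eye(len(X)), f)

def score_candidates(cands, X, w, balance=0.5, eps=1.0):
    # Score candidates by (a) predicted objective value and (b) distance to
    # already evaluated points; lower score is better.
    K = np.exp(-eps * np.sum((cands[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    pred = K @ w
    dist = np.min(np.linalg.norm(cands[:, None, :] - X[None, :, :], axis=-1), axis=1)
    pred_s = (pred - pred.min()) / (pred.max() - pred.min() + 1e-12)
    dist_s = 1.0 - (dist - dist.min()) / (dist.max() - dist.min() + 1e-12)
    return balance * pred_s + (1.0 - balance) * dist_s

The lowest-scoring candidates would be dispatched to the parallel workers for expensive evaluation, and the surrogate refitted once their results return.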
Abstract:
A novel common Tabu algorithm for global optimization of engineering problems is presented. The robustness and efficiency of the presented method are evaluated using standard mathematical functions and by solving a practical engineering problem. The numerical results show that the proposed method is (i) superior to the conventional Tabu search algorithm in robustness, and (ii) superior to the simulated annealing algorithm in efficiency. (C) 2001 Elsevier B.V. All rights reserved.
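For context, a minimal textbook Tabu search loop is sketched below; it is a generic illustration and not the specific common Tabu variant proposed in the paper.

def tabu_search(cost, neighbours, x0, n_iter=200, tabu_size=10):
    # Generic Tabu search: move to the best non-tabu neighbour each iteration,
    # keeping a short-term memory of recently visited solutions.
    best = current = x0
    tabu = [x0]
    for _ in range(n_iter):
        candidates = [x for x in neighbours(current) if x not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best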
Abstract:
Swarm colonies reproduce social habits: working together in a group to reach a predefined goal is a social behaviour occurring in nature. Linear optimization problems have been approached by different techniques based on natural models. In particular, Particle Swarm Optimization is a meta-heuristic search technique that has proven to be effective when dealing with complex optimization problems. This paper presents and develops a new method based on different penalty strategies to solve complex problems. It focuses on the training process of neural networks, the constraints, and the selection of parameters to ensure successful results and to avoid the most common obstacles when searching for optimal solutions.
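A minimal sketch of PSO with a static penalty for constraints of the form g(x) <= 0 is given below; the function names, parameter values and the particular penalty form are illustrative assumptions rather than the strategies developed in the paper.

import numpy as np

def pso_penalty(obj, constraints, bounds, n_particles=30, n_iter=200,
                w=0.7, c1=1.5, c2=1.5, rho=1e3):
    # Penalised fitness: objective plus a static penalty on violations of g(x) <= 0.
    def fitness(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return obj(x) + rho * violation ** 2

    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = lo + np.random.rand(n_particles, dim) * (hi - lo)
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(2, n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                   # position update
        f_val = np.array([fitness(p) for p in x])
        improved = f_val < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f_val[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, fitness(gbest)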
Abstract:
Paper submitted to AIChE 2012 Annual Meeting: Energy Efficiency by Process Intensification, Pittsburgh, PA, October 28-November 2, 2012.
Abstract:
We present an extension of the logic outer-approximation algorithm for dealing with disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included to take into account only those parts of the underlying model that become relevant under certain processing conditions. By doing so, the numerical robustness of the optimization algorithm improves, since the parts of the model that are not active are discarded, leading to a reduced-size problem and avoiding potential model singularities. We test the proposed algorithm on three examples with different complex dynamic behavior. In all the case studies, the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.
Abstract:
Water-alternating-gas (WAG) is an enhanced oil recovery method combining the improved macroscopic sweep of water flooding with the improved microscopic displacement of gas injection. The optimal design of the WAG parameters is usually based on numerical reservoir simulation via trial and error, limited by the reservoir engineer's availability. Employing optimisation techniques can guide the simulation runs and reduce the number of function evaluations. In this study, robust evolutionary algorithms are utilised to optimise hydrocarbon WAG performance in the E-segment of the Norne field. The first objective function is the net present value (NPV), and two global semi-random search strategies, a genetic algorithm (GA) and particle swarm optimisation (PSO), are tested on case studies with different numbers of controlling variables, sampled from the set of water and gas injection rates, bottom-hole pressures of the oil production wells, cycle ratio, cycle time, composition of the injected hydrocarbon gas (miscible/immiscible WAG) and the total WAG period. In progressive experiments, the number of decision-making variables is increased, increasing the problem complexity while potentially improving the efficacy of the WAG process. The second objective function is the incremental recovery factor (IRF) within a fixed total WAG simulation time, optimised using the same optimisation algorithms. The results from the two optimisation techniques are analysed, and their performance, convergence speed and the quality of the optimal solutions found in multiple trials are compared for each experiment. The distinctions between the optimal WAG parameters resulting from NPV and oil recovery optimisation are also examined. This is the first known work optimising over this complete set of WAG variables, and it also illustrates the first use of PSO to optimise a WAG project at the field scale. Compared to the reference cases, the best overall values of the objective functions found by GA and PSO were 13.8% and 14.2% higher, respectively, when NPV is optimised over all the above variables, and 14.2% and 16.2% higher, respectively, when IRF is optimised.
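As an illustration of how such a WAG control problem might be handed to a GA or PSO, the sketch below packs a subset of the controlling variables into a bounded decision vector and wraps a reservoir simulation in an NPV objective. The ranges, prices and the run_wag_simulation callable are hypothetical placeholders, not values from the Norne study.

import numpy as np

# Hypothetical decision vector: [water rate, gas rate, producer BHP, cycle ratio, cycle time]
bounds = [
    (500.0, 4000.0),   # water injection rate, Sm3/day (assumed range)
    (5.0e4, 5.0e5),    # gas injection rate, Sm3/day (assumed range)
    (150.0, 300.0),    # producer bottom-hole pressure, bar (assumed range)
    (0.5, 2.0),        # WAG cycle ratio (assumed range)
    (30.0, 365.0),     # cycle time, days (assumed range)
]

def negative_npv(x, run_wag_simulation, oil_price=60.0, discount=0.08):
    # run_wag_simulation is a placeholder for the coupling to the reservoir simulator;
    # it is assumed to return yearly oil volumes and yearly operating costs.
    oil, costs = run_wag_simulation(x)
    years = np.arange(len(oil))
    cash_flow = np.asarray(oil) * oil_price - np.asarray(costs)
    return -np.sum(cash_flow / (1.0 + discount) ** years)   # minimise negative NPV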
Abstract:
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network for the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
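The full BOA learns a Bayesian network over the rule choices; as a much-simplified stand-in for the "model the promising solutions, then sample" loop, the sketch below uses independent per-nurse rule probabilities (a univariate estimation-of-distribution step). The evaluate function and parameter values are assumptions for illustration only.

import numpy as np

def eda_rule_selection(evaluate, n_nurses, n_rules, pop=100, elite_frac=0.3, n_gen=50):
    # Each candidate solution assigns one scheduling rule (0..n_rules-1) to each nurse.
    prob = np.full((n_nurses, n_rules), 1.0 / n_rules)   # per-nurse rule probabilities
    best, best_cost = None, np.inf
    for _ in range(n_gen):
        samples = np.array([[np.random.choice(n_rules, p=prob[i]) for i in range(n_nurses)]
                            for _ in range(pop)])
        costs = np.array([evaluate(s) for s in samples])
        order = np.argsort(costs)
        if costs[order[0]] < best_cost:
            best, best_cost = samples[order[0]], costs[order[0]]
        elite = samples[order[:int(elite_frac * pop)]]
        for i in range(n_nurses):
            counts = np.bincount(elite[:, i], minlength=n_rules) + 1.0   # Laplace smoothing
            prob[i] = counts / counts.sum()
    return best, best_cost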
Abstract:
This thesis focuses on finding the optimum block cutting dimensions, in terms of environmental and economic factors, by using a 3D algorithm for a limestone quarry in Foggia, Italy. The main environmental concerns of quarrying operations are energy consumption, material waste, and pollution. The main economic concerns are block recovery, selling prices, and production costs. Fractures adversely affect the block recovery ratio; with a fracture model, block production can be optimized. In this research, the waste volume produced by quarrying was minimised to increase the recovery ratio and ensure economic benefits. SlabCutOpt, a software tool developed at DICAM–University of Bologna for block cutting optimization, tests different cutting angles in the x-y-z planes to offer alternative cutting schemes; it evaluates several block sizes and outputs the optimal result for each entry. Using SlabCutOpt, ten different block dimensions were analysed, and the results indicated the maximum number of non-intersecting blocks for each dimension. After analysing the outputs, block number 1, with dimensions 1 m x 1 m x 1 m, had the highest recovery ratio, 43%, and the highest total Relative Money Value (RMV), 22829. Dimension number 1 also had the lowest waste volume for the total bench, 3953.25 m3. For cutting the total bench volume of 6932.25 m3, the diamond wire cutter had the lowest dust emission value, 24 m3, for the block with dimensions 2 m x 2 m x 2 m. When compared with the Eco-Label standards, block dimensions with surface areas lower than 15 m2 were found to meet the natural-resource-waste criterion of the label, as the threshold requires a minimum recovery of 25% [1]. Given the relative nature of the production costs, together with the Eco-Label threshold, the research recommends selecting blocks with a surface area between 6 m2 and 14 m2.
Abstract:
Sensors and actuators based on piezoelectric plates have shown increasing demand in the field of smart structures, including the development of actuators for cooling and fluid-pumping applications and transducers for novel energy-harvesting devices. This project involves the development of a topology optimization formulation for the dynamic design of piezoelectric laminated plates aimed at piezoelectric sensor, actuator and energy-harvesting applications. It distributes piezoelectric material over a metallic plate in order to achieve a desired dynamic behavior with specified resonance frequencies, modes, and an enhanced electromechanical coupling factor (EMCC). The finite element formulation employs a piezoelectric plate element based on the MITC formulation, which is reliable, efficient and avoids the shear-locking problem. The topology optimization formulation is based on the PEMAP-P model combined with the RAMP model, where the design variables are the pseudo-densities that describe the amount of piezoelectric material in each finite element and its polarization sign. The design problem aims at simultaneously designing an eigenshape, i.e., maximizing and minimizing vibration amplitudes at certain points of the structure in a given eigenmode, while tuning the eigenvalue to a desired value and maximizing its EMCC, so that the energy conversion is maximized for that mode. The optimization problem is solved using sequential linear programming. Through this formulation, a design with enhanced energy conversion in the low-frequency spectrum is obtained by minimizing a set of first eigenvalues, enhancing their corresponding eigenshapes while maximizing their EMCCs, which can be considered an approach to the design of energy-harvesting devices. The implementation of the topology optimization algorithm and some results are presented to illustrate the method.
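The abstract does not spell out the PEMAP-P/RAMP parametrisation; as a rough illustration, a RAMP-style interpolation of a piezoelectric coefficient with a separate design variable controlling the polarization sign could look like the sketch below (the penalisation actually used in the paper may differ).

def ramp(rho, q=8.0):
    # RAMP interpolation: rho = 0 gives no material, rho = 1 the full property value.
    return rho / (1.0 + q * (1.0 - rho))

def effective_piezo_coefficient(rho, pol, e0, q=8.0):
    # rho: pseudo-density of piezoelectric material in an element (0..1)
    # pol: polarization design variable (0..1), mapped to a sign between -1 and 1
    return (2.0 * pol - 1.0) * ramp(rho, q) * e0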
Abstract:
Load cells are used extensively in engineering fields. This paper describes a novel structural optimization method for single- and multi-axis load cell structures. First, we briefly explain the topology optimization approach, which uses the solid isotropic material with penalization (SIMP) method. Next, we clarify the mechanical requirements and design specifications of the single- and multi-axis load cell structures, which are formulated as an objective function. In the case of multi-axis load cell structures, a methodology based on singular value decomposition is used. The sensitivities of the objective function with respect to the design variables are then formulated. On the basis of these formulations, an optimization algorithm is constructed using finite element methods and the method of moving asymptotes (MMA). Finally, we examine the characteristics of the optimization formulations and the resulting optimal configurations, and confirm the usefulness of the proposed methodology for the optimization of single- and multi-axis load cell structures.
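For reference, the textbook SIMP interpolation and the element-wise sensitivity it implies are sketched below; the paper's load-cell-specific objective, the singular value decomposition step and the MMA update are not shown.

def simp_modulus(rho, E0=200.0e9, Emin=1.0e-9 * 200.0e9, p=3.0):
    # Modified SIMP: interpolate Young's modulus between void (Emin) and solid (E0).
    return Emin + rho ** p * (E0 - Emin)

def simp_sensitivity(rho, dC_dE, E0=200.0e9, Emin=1.0e-9 * 200.0e9, p=3.0):
    # Chain rule: d(objective)/d(rho) = d(objective)/dE * dE/d(rho).
    return dC_dE * p * rho ** (p - 1.0) * (E0 - Emin)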
Abstract:
Piezoresistive materials, materials whose resistivity properties change when subjected to mechanical stresses, are widely utilized in many industries as sensors, including pressure sensors, accelerometers, inclinometers, and load cells. Basic piezoresistive sensors consist of piezoresistive devices bonded to a flexible structure, such as a cantilever or a membrane, where the flexible structure transmits pressure, force, or inertial force due to acceleration, thereby causing a stress that changes the resistivity of the piezoresistive devices. By applying a voltage to a piezoresistive device, its resistivity can be measured and correlated with the amplitude of an applied pressure or force. The performance of a piezoresistive sensor is closely related to the design of its flexible structure. In this research, we propose a generic topology optimization formulation for the design of piezoresistive sensors where the primary aim is high response. First, the concept of topology optimization is briefly discussed. Next, design requirements are clarified, and corresponding objective functions and the optimization problem are formulated. An optimization algorithm is constructed based on these formulations. Finally, several design examples of piezoresistive sensors are presented to confirm the usefulness of the proposed method.
Abstract:
Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations aim to diminish the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid size resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
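For context, the four-point staggered central difference referred to above is sketched below with the standard Taylor-derived coefficients C1 = 9/8 and C2 = -1/24; the paper's contribution is to replace these with values optimized for a chosen grid resolution and time step.

import numpy as np

def d_dx_24(f, dx, C1=9.0 / 8.0, C2=-1.0 / 24.0):
    # Four-point staggered central difference used in (2,4) FDTD schemes.
    # f is sampled on integer nodes; the derivative is returned on the
    # half-grid points between f[i+1] and f[i+2].
    f = np.asarray(f, dtype=float)
    near = f[2:-1] - f[1:-2]   # samples at +/- dx/2 from the evaluation point
    far = f[3:] - f[:-3]       # samples at +/- 3*dx/2
    return (C1 * near + C2 * far) / dx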
Abstract:
The operation of power systems in a Smart Grid (SG) context brings new opportunities for consumers as active players, helping to fully realise the SG advantages. In this context, concepts such as smart homes and smart buildings are promising approaches to optimising consumption while reducing electricity costs. This paper proposes an intelligent methodology to support the consumption optimization of an industrial consumer that has a Combined Heat and Power (CHP) facility. A SCADA (Supervisory Control and Data Acquisition) system developed by the authors is used to support the implementation of the proposed methodology. An optimization algorithm implemented in the system determines the optimal consumption and CHP levels at each instant, according to the Demand Response (DR) opportunities. The paper includes a case study with several scenarios of consumption and heat demand in the context of a DR event that specifies a maximum demand level for the consumer.
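As a toy illustration of the per-instant decision described above, the sketch below chooses grid consumption and CHP output to minimise cost subject to a DR demand cap, using a simple linear program; the variable names, the heat-to-power ratio and the cost structure are assumptions, not the methodology implemented in the SCADA system.

from scipy.optimize import linprog

def optimise_instant(demand_kw, heat_demand_kw, grid_price, chp_cost,
                     chp_max_kw, dr_cap_kw, heat_to_power=1.2):
    # Decision variables: x = [grid_power_kw, chp_power_kw]; minimise instantaneous cost.
    c = [grid_price, chp_cost]
    A_eq = [[1.0, 1.0]]                  # grid + CHP must meet the electrical demand
    b_eq = [demand_kw]
    A_ub = [[1.0, 0.0],                  # grid demand limited by the DR event cap
            [0.0, -heat_to_power]]       # CHP heat output must cover the heat demand
    b_ub = [dr_cap_kw, -heat_demand_kw]
    bounds = [(0.0, None), (0.0, chp_max_kw)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x if res.success else None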