930 results for Local optimization algorithms
Abstract:
In this article, a methodology is presented for the simultaneous determination of the effective diffusivity and the convective mass transfer coefficient in porous solids that can be modeled as an infinite cylinder during drying. Two models are used for optimization and drying simulation: model 1 (constant volume and diffusivity, with an equilibrium boundary condition) and model 2 (constant volume and diffusivity, with a convective boundary condition). Optimization algorithms based on the inverse method were coupled to the analytical solutions so that these solutions can be fitted to experimental drying kinetics data. The optimization methodology was applied to describe the drying kinetics of whole bananas, using experimental data available in the literature. The statistical indicators show that the solution of the diffusion equation with a convective boundary condition yields results superior to those obtained with the equilibrium boundary condition.
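As a hedged illustration of model 2, the sketch below evaluates the classical series solution for the average moisture ratio of an infinite cylinder with a convective boundary condition, with eigenvalues taken as roots of λ J1(λ) = Bi J0(λ). It is not the authors' code; the diffusivity, radius and Biot number are illustrative placeholders, and in the inverse method these would be the quantities fitted to the experimental drying kinetics.

```python
# Sketch: average moisture ratio of an infinite cylinder dried under a
# convective boundary condition (model 2). D, R and Bi are illustrative.
import numpy as np
from scipy.special import j0, j1
from scipy.optimize import brentq

def cylinder_roots(Bi, n_roots=20):
    """First n_roots positive roots of  lam*J1(lam) - Bi*J0(lam) = 0."""
    f = lambda lam: lam * j1(lam) - Bi * j0(lam)
    roots, a = [], 1e-6
    while len(roots) < n_roots:
        b = a + 0.1
        if f(a) * f(b) < 0:
            roots.append(brentq(f, a, b))
        a = b
    return np.array(roots)

def moisture_ratio(t, D, R, Bi, n_roots=20):
    """Series solution for the average moisture ratio at time t."""
    lam = cylinder_roots(Bi, n_roots)
    A = 4.0 * Bi**2 / (lam**2 * (lam**2 + Bi**2))   # sums to 1 at t = 0
    return np.sum(A * np.exp(-lam**2 * D * t / R**2))

print(moisture_ratio(t=3600.0, D=1e-9, R=0.015, Bi=5.0))
```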
Abstract:
To enrich parallel bilingual corpus data, it can be worthwhile to work with so-called comparable corpora. In this type of corpus, even if the documents in the target language are not exact translations of those in the source language, one can still find words or sentences that stand in a translation relation. The free encyclopedia Wikipédia constitutes a multilingual comparable corpus of several million documents. Our work consists of finding a general, endogenous method for extracting a maximum number of parallel sentences. We work with the French-English language pair, but our method, which uses no external bilingual resources, can be applied to any other language pair. It proceeds in two steps. The first detects the article pairs most likely to contain translations, using a neural network trained on a small data set of sentence-aligned articles. The second step selects the sentence pairs using another neural network whose outputs are then reinterpreted by a combinatorial optimization algorithm and an extension heuristic. Adding the roughly 560,000 sentence pairs extracted from Wikipédia to the training corpus of a baseline statistical machine translation system improves the quality of the translations produced. We make the aligned data and the extracted corpus available to the scientific community.
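As a rough illustration of the second step, the sketch below reinterprets a matrix of neural-network scores as a one-to-one sentence alignment via the Hungarian algorithm. The score matrix, the acceptance threshold and the use of scipy's linear_sum_assignment are our assumptions; the paper's exact combinatorial algorithm and extension heuristic are not reproduced here.

```python
# Sketch of the combinatorial selection step: given network scores for
# every (source, target) sentence pair of an article pair, keep a
# one-to-one alignment maximizing the total score. Values are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.array([[0.91, 0.12, 0.05],
                   [0.08, 0.77, 0.30],
                   [0.10, 0.25, 0.15]])     # rows: FR sentences, cols: EN

rows, cols = linear_sum_assignment(-scores)       # maximize total score
pairs = [(i, j) for i, j in zip(rows, cols) if scores[i, j] > 0.5]
print(pairs)   # [(0, 0), (1, 1)] -- pair (2, 2) rejected by the threshold
```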
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
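To make the decomposition idea concrete, here is a minimal sketch that optimizes the SVM dual one variable at a time while all other variables stay fixed (a one-element working set, linear kernel, no bias term). This is a simplified stand-in, not the paper's method: their algorithm uses larger working sets and a second-order reduced-gradient solver for the sub-problems.

```python
# Minimal decomposition sketch: dual coordinate descent for a linear SVM
# without bias. Each step solves a 1-variable box-constrained subproblem
# in closed form while all other dual variables are held fixed.
import numpy as np

def svm_dual_coordinate_descent(X, y, C=1.0, epochs=50):
    n, d = X.shape
    alpha, w = np.zeros(n), np.zeros(d)
    diag = np.einsum('ij,ij->i', X, X)           # K(x_i, x_i) precomputed
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * (w @ X[i]) - 1.0          # dual gradient wrt alpha_i
            a_new = np.clip(alpha[i] - g / diag[i], 0.0, C)
            w += (a_new - alpha[i]) * y[i] * X[i]  # incremental weight update
            alpha[i] = a_new
    return w, alpha

X = np.array([[2.0, 2.0], [1.5, 1.8], [-1.0, -1.2], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
w, alpha = svm_dual_coordinate_descent(X, y)
print(np.sign(X @ w))    # should match y on this toy separable set
```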
Abstract:
A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods using a new conceptual categorization of classification methods that reflects the strategy used to define the types. Methods using predefined types include manual and threshold-based classifications, while methods producing types derived from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. In order to allow direct comparisons between the methods, the circulation input data and the methods' configurations were harmonized to produce a subset of standard catalogs for the automated methods. The harmonization covers the data source, the climatic parameters used, the classification period, the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including the variation of class sizes, persistence, seasonal and inter-annual variability, and trends of the annual frequency time series. The methodological concept of the classifications is partly reflected in these properties of the resulting catalogs. It is shown that the types of subjective classifications show higher persistence, inter-annual variation and long-term trends than those of automated methods. Among the automated classifications, optimization methods show a tendency toward longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role for the properties of the resulting classification as the algorithm used for type definition and assignment.
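As a generic illustration of the optimization-based family of methods, the sketch below clusters synthetic daily pressure fields into k types by minimizing within-type Euclidean distance (plain k-means). The data, grid size and number of types are invented; the COST 733 methods differ in distance metric, preprocessing and seeding strategy.

```python
# Generic optimization-type classification sketch: assign each day's
# sea-level-pressure field to one of k circulation types by k-means.
# Random data stand in for the harmonized gridded input.
import numpy as np

rng = np.random.default_rng(0)
days = rng.normal(size=(3650, 10 * 15))   # 10 years x (10x15 grid), fake
k = 9                                      # number of types

centroids = days[rng.choice(len(days), k, replace=False)]
for _ in range(50):
    d = np.linalg.norm(days[:, None, :] - centroids[None], axis=2)
    types = d.argmin(axis=1)                       # assign day -> type
    centroids = np.array([days[types == j].mean(axis=0)
                          if np.any(types == j) else centroids[j]
                          for j in range(k)])

print(np.bincount(types, minlength=k))   # class sizes of the catalog
```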
Abstract:
The multiple-vehicle planning problem deals with the design of an effective algorithm that lets multiple autonomous vehicles on the road communicate and generate a collaborative optimal travel plan. Our modelling of the problem considers vehicles that vary greatly in both size and speed, which makes it suboptimal to have a faster vehicle follow a slower one or to make vehicles drive in predefined speed lanes. It is essential to have a planning algorithm that is fast whilst still being probabilistically complete. The Rapidly-exploring Random Trees (RRT) algorithm developed and reported on here uses a problem-specific coordination axis, a local optimization algorithm, priority-based coordination, and a module for deciding travel speeds. Vehicles are assumed to remain in their current relative lateral position on the road unless otherwise instructed. Experimental results presented here show regular driving behaviours, namely vehicle following, overtaking, and complex obstacle avoidance. The ability to exhibit complex behaviours in the absence of speed lanes is characteristic of the solution developed.
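A minimal, hedged sketch of the underlying RRT mechanism on a 2D road plane follows; the coordination axis, priority scheme and speed-deciding module of the paper are omitted, and the obstacle geometry and step size are illustrative.

```python
# Minimal 2D RRT sketch: x along the road, y lateral. One circular
# obstacle stands in for a slower vehicle; values are illustrative.
import random, math

obstacles = [(5.0, 0.0, 1.0)]          # (x, y, radius)
step, goal = 0.5, (10.0, 0.0)

def collision_free(p):
    return all(math.dist(p, (ox, oy)) > r for ox, oy, r in obstacles)

nodes, parent = [(0.0, 0.0)], {0: None}
for _ in range(5000):
    q = goal if random.random() < 0.1 else \
        (random.uniform(0, 11), random.uniform(-3, 3))   # goal bias
    i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
    near = nodes[i]
    d = math.dist(near, q) or 1e-9
    new = (near[0] + step * (q[0] - near[0]) / d,
           near[1] + step * (q[1] - near[1]) / d)        # steer by one step
    if collision_free(new):
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:                  # goal reached
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j]); j = parent[j]
            print(list(reversed(path)))
            break
```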
Abstract:
The regimen of environmental flows (EF) must be included as a term of environmental demand in the management of water resources. Even though there are numerous methods for the computation of EF, the criteria applied at different steps of the calculation process are quite subjective, whereas the results are fixed values that must be met by water planners. This study presents a user-friendly tool for assessing the probability that a given EF scenario complies with the natural regimen in a semiarid area of southern Spain. 250 replications of a 25-yr period of different hydrological variables (rainfall, minimum and maximum flows, ...) were obtained at the study site by combining the Monte Carlo technique with local hydrological relationships. Several assumptions are made, such as the independence of annual rainfall from year to year and the variability of occurrence of the meteorological agents, mainly precipitation, as the main source of uncertainty. Inputs to the tool are easily selected from a first menu and comprise measured rainfall data, EF values and the hydrological relationships for at least a 20-yr period. The outputs are the probabilities of compliance of the different components of the EF over the study period. From this, local optimization can be applied to establish EF components with a given level of compliance over the study period. Different options for graphic output and analysis of results are included, in terms of graphs and tables in several formats. This methodology turned out to be a useful tool for implementing an uncertainty analysis within the scope of environmental flows in water management and allowed the simulation of the impacts of several water resource development scenarios at the study site.
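The following sketch illustrates the Monte Carlo step under stated assumptions: annual minimum flows are replicated from a fitted distribution (log-normal here, purely as an example) and the probability of compliance with an EF threshold is estimated over 250 replications of a 25-yr period. The distribution, its parameters and the threshold are placeholders, not the tool's local hydrological relationships.

```python
# Sketch of the Monte Carlo compliance estimate: 250 replications of a
# 25-yr series of annual minimum flows; distribution and EF threshold
# are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_rep, n_years = 250, 25
mu, sigma = 1.2, 0.6      # log-flow stats, assumed fitted from local data
ef_min = 2.0              # environmental minimum flow, m3/s (invented)

flows = rng.lognormal(mu, sigma, size=(n_rep, n_years))  # independent years
compliant = (flows >= ef_min).mean(axis=1)  # compliant-year fraction per rep
print("P(>=90% of years compliant):", (compliant >= 0.9).mean())
```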
Abstract:
The two-dimensional periodic structures called frequency selective surfaces have been well investigated because of their filtering properties. Similar to filters that work in the traditional radiofrequency bands, such structures can behave as band-stop or band-pass filters, depending on the elements of the array (patch or aperture, respectively), and can be used in a variety of applications, such as radomes, dichroic reflectors, waveguide filters, artificial magnetic conductors and microwave absorbers. To provide high-performance filtering at microwave bands, electromagnetic engineers have investigated various types of periodic structures: reconfigurable frequency selective screens, multilayered selective filters, and periodic arrays printed on anisotropic dielectric substrates or composed of fractal elements. In general, there is no closed-form path from a desired frequency response to a corresponding device; thus, the analysis of its scattering characteristics requires the application of rigorous full-wave techniques. Moreover, because of the computational cost of using a full-wave simulator to evaluate the scattering variables of a frequency selective surface, many electromagnetic engineers still use a trial-and-error process until a given design criterion is achieved. As this procedure is laborious and operator-dependent, optimization techniques are required to design practical periodic structures with the desired filter specifications. Some authors have employed neural networks and natural optimization algorithms, such as genetic algorithms and particle swarm optimization, for frequency selective surface design and optimization. The objective of this work is a rigorous study of the electromagnetic behavior of these periodic structures, enabling the design of efficient devices for the microwave bands. To this end, artificial neural networks are used together with natural optimization techniques, allowing the accurate and efficient investigation of various types of frequency selective surfaces in a simple and fast manner, becoming a powerful tool for the design and optimization of such structures.
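As a hedged sketch of the design loop described, the code below couples a surrogate standing in for the trained neural network with a basic particle swarm search over two illustrative FSS dimensions. The toy surrogate function, the 10 GHz target and the bounds are all our assumptions, not the work's trained model.

```python
# Sketch: a surrogate replaces the full-wave solver inside a particle
# swarm search for FSS dimensions. The linear toy function stands in
# for the trained ANN; target and bounds are invented.
import numpy as np

def ann_surrogate(w, g):                 # placeholder for the trained net:
    return 12.0 - 0.8 * w + 0.5 * g      # predicted resonance (GHz)

target = 10.0
rng = np.random.default_rng(1)
lo, hi = np.array([1.0, 0.2]), np.array([8.0, 2.0])  # (width, gap) mm
pos = rng.uniform(lo, hi, size=(30, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pcost = np.abs(ann_surrogate(pos[:, 0], pos[:, 1]) - target)
for _ in range(200):
    g = pbest[pcost.argmin()]                         # global best
    vel = 0.7 * vel + 1.5 * rng.random((30, 2)) * (pbest - pos) \
                    + 1.5 * rng.random((30, 2)) * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.abs(ann_surrogate(pos[:, 0], pos[:, 1]) - target)
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
print(pbest[pcost.argmin()], pcost.min())
```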
Abstract:
In this work, Markov chains are the tool used to model and analyze the convergence of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm admits. In addition, we compare the performance of the standard version with a fuzzy version, on the premise that the fuzzy version gives the genetic algorithm a greater ability to find a global optimum, as is characteristic of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools for solving optimization problems, owing to its effectiveness in finding good-quality solutions: a good-quality solution is acceptable given that there may be no algorithm able to obtain the optimal solution for many of these problems. The algorithm's behavior depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, achieving good performance with this algorithm requires an adequate criterion for choosing its parameters, especially the mutation rate, the crossover rate and the population size. In implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain; when the parameters are allowed to vary during the execution, the modeling Markov chain becomes non-homogeneous. Hence, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding low-quality ones. Strategies for feature extraction can use either crisp techniques or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm in both versions. In order to evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, whose mutation rate is adjusted by a fuzzy controller. To this end, we choose optimization problems whose number of solutions varies exponentially with the number of variables.
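A minimal sketch of the adaptive idea follows: a genetic algorithm whose mutation rate is adjusted on-line from population diversity, a crude rule-based stand-in for the fuzzy controller (it is exactly this on-line variation that makes the modeling Markov chain non-homogeneous). The OneMax problem, thresholds and rates are illustrative.

```python
# GA with an on-line mutation rate driven by population diversity, a
# crude stand-in for a fuzzy controller. Problem: OneMax on 30 bits.
import numpy as np

rng = np.random.default_rng(3)
pop = rng.integers(0, 2, size=(40, 30))
p_mut = 0.05
for gen in range(200):
    fit = pop.sum(axis=1)
    diversity = pop.std(axis=0).mean()            # controller input
    p_mut = 0.01 if diversity > 0.4 else 0.10     # low diversity -> mutate more
    idx = rng.choice(40, size=(40, 2), p=fit / fit.sum())  # roulette selection
    mask = rng.random((40, 30)) < 0.5                      # uniform crossover
    child = np.where(mask, pop[idx[:, 0]], pop[idx[:, 1]])
    child ^= (rng.random((40, 30)) < p_mut)                # bit-flip mutation
    child[0] = pop[fit.argmax()]                           # elitism
    pop = child
print(pop.sum(axis=1).max())   # best fitness found (optimum is 30)
```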
Abstract:
Separation methods have limited application as a result of their operational costs, low throughput and the long time needed to separate the fluids. Nevertheless, these treatment methods are important because of the need to extract unwanted contaminants from the oil production stream: the water must be removed, and the concentration of oil in water should be minimal (around 20 to 40 ppm) before it is discharged into the sea. Given the need for primary treatment, the objective of this project is to study and implement algorithms for the closed-loop identification of polynomial NARX (Nonlinear Auto-Regressive with eXogenous input) models, to implement structure identification, and to compare control strategies using PI control and on-line updated NARX predictive models on a combination of a three-phase separator in series with three hydrocyclone batteries. The main goals of this project are: to obtain an optimized phase-separation process that keeps the system regulated even in the presence of oil gushes; to show that it is possible to obtain optimized controller tunings by analyzing the loop as a whole; and to evaluate and compare the PI and predictive control strategies applied to the process. To accomplish these goals, a simulator was used to represent the three-phase separator and the hydrocyclones. Algorithms were developed for system identification (NARX) using RLS (Recursive Least Squares), along with methods for model structure detection. Predictive control algorithms were also implemented with the NARX model updated on-line, together with optimization algorithms using PSO (Particle Swarm Optimization). The project ends with a comparison of the results obtained using PI and predictive controllers (both tuned through the particle swarm algorithm) in the simulated system. It concludes that the optimizations performed make the system less sensitive to external perturbations and that, when optimized, the two controllers show similar results, with the predictive controller somewhat less sensitive to disturbances.
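As an illustration of the identification step, the sketch below runs recursive least squares on a toy polynomial NARX model with an assumed regressor structure; structure detection, the closed loop and the real process data are omitted.

```python
# RLS identification of a toy polynomial NARX model
#   y(k) = a*y(k-1) + b*u(k-1) + c*y(k-1)*u(k-1)
# The regressor structure and data are illustrative.
import numpy as np

rng = np.random.default_rng(5)
true = np.array([0.7, 0.4, 0.15])
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(1, 500):                       # simulate the "plant"
    phi = np.array([y[k-1], u[k-1], y[k-1] * u[k-1]])
    y[k] = true @ phi + 0.01 * rng.normal()

theta = np.zeros(3)
P = 1e3 * np.eye(3)            # large initial covariance
lam = 0.99                     # forgetting factor for on-line updating
for k in range(1, 500):
    phi = np.array([y[k-1], u[k-1], y[k-1] * u[k-1]])
    K = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta += K * (y[k] - phi @ theta)         # parameter update
    P = (P - np.outer(K, phi @ P)) / lam      # covariance update
print(theta)    # should approach [0.7, 0.4, 0.15]
```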
Abstract:
This article presents a brief review of some of the most recent bio-inspired, population-based methods for developing problem-solving techniques. The metaheuristics treated here are ant colony optimization, particle swarm optimization, the shuffled frog-leaping algorithm, bacterial foraging and the bee colony. The biological principles that motivated the development of each of these strategies, as well as their respective computational algorithms, are introduced. Two different applications were carried out to exemplify the performance of these algorithms. The purpose is to emphasize the application prospects of these approaches in different engineering problems.
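As a concrete example of one reviewed metaheuristic, here is a hedged sketch of ant colony optimization on a small symmetric traveling salesman instance; the distances and parameters are illustrative, not taken from the article's applications.

```python
# Ant colony optimization sketch on an 8-city symmetric TSP.
import numpy as np

rng = np.random.default_rng(7)
n = 8
pts = rng.random((n, 2))
D = np.linalg.norm(pts[:, None] - pts[None], axis=2) + np.eye(n)  # no /0
eta = 1.0 / D                              # heuristic visibility
tau = np.ones((n, n))                      # pheromone trails
best_len, best_tour = np.inf, None
for _ in range(100):
    tours = []
    for _ant in range(10):
        tour = [0]
        while len(tour) < n:
            w = tau[tour[-1]] * eta[tour[-1]] ** 2
            w[tour] = 0.0                  # forbid visited cities
            tour.append(int(rng.choice(n, p=w / w.sum())))
        tours.append(tour)
    tau *= 0.9                             # pheromone evaporation
    for tour in tours:
        length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
        for k in range(n):
            tau[tour[k], tour[(k + 1) % n]] += 1.0 / length  # deposit
        if length < best_len:
            best_len, best_tour = length, tour
print(best_len, best_tour)
```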
Abstract:
Linear Matrix Inequalities (LMIs) are a powerful tool that has been used in many areas, ranging from control engineering to system identification and structural design. Several factors make LMIs appealing. One is the fact that many design specifications and constraints can be formulated as LMIs [1]. Once formulated in terms of LMIs, a problem can be solved efficiently by convex optimization algorithms. The basic idea of the LMI method is to formulate a given problem as an optimization problem with a linear objective function and linear matrix inequality constraints. An intelligent structure involves distributed sensors and actuators and a control law that applies localized actions in order to minimize or reduce the response under selected conditions. The objective of this work is to implement LMI-based control techniques applied to smart structures.
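A minimal sketch of the workflow described (formulate LMIs, hand them to a convex solver) is given below for the classic Lyapunov stability test P > 0, A^T P + P A < 0 in the matrix-definite sense; the use of cvxpy and the test matrix are our choices, not the paper's.

```python
# LMI feasibility sketch: find P >> 0 with A'P + PA << 0, which
# certifies stability of x' = Ax. Example matrix is illustrative.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # a stable test system
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(2),
        A.T @ P + P @ A << -eps * np.eye(2)]
cp.Problem(cp.Minimize(0), cons).solve()    # pure feasibility problem
print(P.value)   # any feasible P is a Lyapunov certificate
```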
Abstract:
This work presents an algorithm for the security control of electric power systems using control actions such as generation reallocation, determined by sensitivity analysis (linearized model) and optimization by neural networks. The model is developed taking the dynamic aspects of the network into account. The preventive control methodology is developed by means of a sensitivity analysis of the security margin with respect to the mechanical power of the system's synchronous machines. The power reallocated to each machine is determined using neural networks. The neural network used in this work is of the Hopfield type. These networks are dedicated electric circuits that simulate the constraint set and the objective function of an optimization problem. The advantage of using these networks is the higher speed in obtaining solutions when compared to conventional optimization algorithms, due to the high convergence rate of the process and the ease of parallelizing the method. The objectives are, then, to formulate these networks and to investigate their implementation on digital computers for determining the generation reallocation. To illustrate the proposed methodology, an application considering a multi-machine system is presented.
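As a hedged sketch of the Hopfield-as-optimizer idea, the code below lets an analog network state flow downhill on a quadratic energy, so its equilibrium approximates the constrained minimizer; the quadratic is an illustrative placeholder, not the paper's security-margin model.

```python
# Hopfield-style analog optimization sketch: the state follows
# gradient-like circuit dynamics on E(v) = 0.5 v'Qv - c'v, with the
# sigmoid keeping v in (0, 1). Q and c are invented.
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, 1.5])
u = np.zeros(2)                            # internal neuron states
dt, gain = 0.05, 4.0
for _ in range(2000):
    v = 1.0 / (1.0 + np.exp(-gain * u))    # neuron outputs in (0, 1)
    u += dt * (-(Q @ v) + c - u)           # du/dt = -u - dE/dv
print(v, 0.5 * v @ Q @ v - c @ v)          # equilibrium and its energy
```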
Abstract:
The study of algorithms for active vibration control in flexible structures has become an area of enormous interest, mainly due to the countless demands for optimal performance of mechanical systems such as aircraft, aerospace and automotive structures. Smart structures, formed by a base structure coupled with piezoelectric actuators and sensors, are capable of guaranteeing the required conditions through the application of several types of controllers. The actuator/sensor materials are composed of piezoelectric ceramics (PZT, lead zirconate titanate), commonly used as distributed actuators, and piezoelectric plastic films (PVDF, polyvinylidene fluoride), highly suitable for distributed sensors. The design process of such a system encompasses three main phases: structural design; optimal placement of sensors and actuators (PVDF and PZT); and controller design. Consequently, for optimal design purposes, the structure, the sensor/actuator placement and the controller have to be considered simultaneously. This article addresses the optimal placement of actuators and sensors for the design of a controller for vibration attenuation in a flexible plate. Techniques involving linear matrix inequalities (LMIs) to solve the Riccati equation are used. The controller gain is calculated using the linear quadratic regulator (LQR). The major advantage of LMI design is that it can handle specifications such as stability degree requirements, decay rate, input force limitation on the actuators and output peak bounds. It is also possible to assume that the model parameters involve uncertainties. LMIs are a very useful tool for problems with constraints in which the parameters vary over a range of values. Once formulated in terms of LMIs, a problem can be solved efficiently by convex optimization algorithms.
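To illustrate the LQR computation named in the text, the sketch below solves the continuous algebraic Riccati equation and forms the state-feedback gain K = R^-1 B^T P for a toy lightly damped mode; the plate model and the weights are illustrative stand-ins.

```python
# LQR gain via the continuous algebraic Riccati equation for a single
# lightly damped vibration mode. A, B and the weights are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-4.0, -0.02]])   # one lightly damped mode
B = np.array([[0.0], [1.0]])
Qw = np.diag([10.0, 1.0])                    # state weight
Rw = np.array([[0.1]])                       # actuator effort weight
P = solve_continuous_are(A, B, Qw, Rw)
K = np.linalg.solve(Rw, B.T @ P)             # u = -K x attenuates vibration
print(K, np.linalg.eigvals(A - B @ K))       # closed-loop poles moved left
```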
Abstract:
Image restoration is a research field that attempts to recover a blurred and noisy image. Since the problem can be modeled as a linear system, we propose in this paper to use the meta-heuristic optimization algorithm Harmony Search (HS) to find near-optimal solutions in a Projections Onto Convex Sets-based formulation. The experiments, using HS and four of its variants, have shown that we can obtain near-optimal restored images faster than with another evolutionary optimization approach. © 2013 IEEE.
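A minimal sketch of basic Harmony Search is given below, under the assumption of a stand-in sphere objective rather than the paper's POCS-based restoration criterion; the parameter values are typical defaults, not those of the paper.

```python
# Basic Harmony Search sketch minimizing a placeholder objective.
import numpy as np

rng = np.random.default_rng(9)
f = lambda x: np.sum(x ** 2)             # stand-in for the restoration cost
hms, dim, hmcr, par, bw = 20, 5, 0.9, 0.3, 0.05
hm = rng.uniform(-5, 5, (hms, dim))      # harmony memory
cost = np.array([f(x) for x in hm])
for _ in range(5000):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                  # memory consideration
            new[j] = hm[rng.integers(hms), j]
            if rng.random() < par:               # pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                                    # random re-initialization
            new[j] = rng.uniform(-5, 5)
    worst = cost.argmax()
    if f(new) < cost[worst]:                     # replace worst harmony
        hm[worst], cost[worst] = new, f(new)
print(cost.min())
```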