903 results for "Polynomial-time algorithm"
H-infinity control design for time-delay linear systems: a rational transfer function based approach
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This work proposes an algorithm for controlling and optimizing idle time in oil production wells equipped with beam pumps. The algorithm was designed entirely from existing papers and from data acquired at two pilot wells in the Potiguar Basin. Petroleum engineering concepts such as submergence, pump-off, basic sediments and water (BSW), inflow performance relationship (IPR), reservoir pressure, and inflow pressure, among others, were incorporated into the algorithm through a mathematical treatment developed for a typical well and then extended to the general case. The optimization maximizes the use of the well's production potential with the smallest number of pumping-unit cycles, directly reducing operational cost and electricity consumption.
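A minimal sketch of the kind of idle-time calculation the abstract describes, assuming a simplified linear IPR and an illustrative fluid-accumulation target; every name, unit, and value below is hypothetical, not taken from the pilot-well data:

```python
# Hypothetical sketch (not the paper's algorithm): estimate the idle time
# between pumping cycles so the unit restarts only after enough fluid has
# accumulated downhole. Assumes a simplified linear IPR; pressures are in
# kgf/cm2 and the productivity index in m3/h per kgf/cm2, all illustrative.

def inflow_rate(p_res, p_wf, productivity_index):
    """Linear IPR: inflow (m3/h) proportional to reservoir drawdown."""
    return max(productivity_index * (p_res - p_wf), 0.0)

def idle_time(target_volume_m3, p_res, p_wf, productivity_index):
    """Hours needed for the well to feed back `target_volume_m3` of fluid."""
    q = inflow_rate(p_res, p_wf, productivity_index)
    if q == 0.0:
        raise ValueError("no drawdown: the well does not flow")
    return target_volume_m3 / q

# Example: wait for 0.8 m3 of accumulated fluid before restarting the unit.
print(f"idle time: {idle_time(0.8, 150.0, 110.0, 0.005):.1f} h")
```

In a real well, the target volume would follow from the submergence and pump-off criteria mentioned in the abstract rather than a fixed constant.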
Abstract:
This paper presents an evaluative study of the effects of adding a machine learning technique to the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA is a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are implementations that dynamically change many components of the algorithm (self-organizing algorithms). Combinations of GAs with machine learning techniques, used to improve performance and usability, are likewise common. In this work, a GA coupled with a machine learning technique was analyzed and applied to antenna design. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness (or evaluation) function is responsible for determining the fitness of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, for example in the design of antennas and frequency selective surfaces. In this work, the algorithm was developed to optimize the design of a microstrip antenna of the kind used in wireless communication systems for ultra-wideband (UWB) applications. The algorithm optimized two antenna geometry variables, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one spline per optimization objective, which are combined into a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with measurements of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success on four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. The study found an increase in execution time compared with a common GA, due to the time required by the machine learning process. On the plus side, we observed a noticeable gain in flexibility and accuracy of results, and a promising path toward extending the algorithm to optimization problems with n variables.
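To make the surrogate-fitness idea concrete, here is a stripped-down evolutionary loop (selection and mutation only, no crossover) whose three objectives are read from bicubic splines fitted over (Ws, Ls). The measurement grids, objective surfaces, weights, and settings are all invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(1)

# Hypothetical measurement grid over slit width Ws and length Ls (mm).
ws_grid = np.linspace(2.0, 10.0, 9)
ls_grid = np.linspace(5.0, 20.0, 16)
W, L = np.meshgrid(ws_grid, ls_grid, indexing="ij")
bandwidth   = 7.0 - 0.05 * (W - 6) ** 2 - 0.02 * (L - 12) ** 2   # GHz (fake)
return_loss = -20.0 - 0.5 * np.abs(W - 6) - 0.3 * np.abs(L - 12) # dB (fake)
freq_dev    = 0.1 * np.abs(W - 6) + 0.05 * np.abs(L - 12)        # GHz (fake)

# One bicubic-spline surrogate per optimization objective.
splines = [RectBivariateSpline(ws_grid, ls_grid, z)
           for z in (bandwidth, return_loss, freq_dev)]

def fitness(ind):
    ws, ls = ind
    bw  = splines[0].ev(ws, ls)    # maximize bandwidth
    rl  = splines[1].ev(ws, ls)    # minimize return loss (more negative is better)
    dev = splines[2].ev(ws, ls)    # minimize center-frequency deviation
    return bw - rl - 10.0 * dev    # aggregate fitness; weights are illustrative

pop = rng.uniform([2.0, 5.0], [10.0, 20.0], size=(30, 2))
for _ in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]             # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.2, (30, 2))
    pop = np.clip(children, [2.0, 5.0], [10.0, 20.0])   # mutation + bounds

best = max(pop, key=fitness)
print("best (Ws, Ls):", np.round(best, 2))
```

The point of the surrogate is that `fitness` costs a spline evaluation instead of a laboratory measurement or a full electromagnetic simulation.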
Abstract:
Wavelet functions have been used as activation functions in feedforward neural networks, and an abundance of R&D has been produced in the wavelet neural network area. Successful algorithms and applications of wavelet neural networks have been developed and reported in the literature. However, most of these reports impose many restrictions on the classical backpropagation algorithm, such as low dimensionality, tensor products of wavelets, constrained parameter initialization, and, in general, a one-dimensional output. In order to remove some of these restrictions, a family of polynomial wavelets generated from powers of sigmoid functions is presented. We describe how multidimensional wavelet neural networks based on these functions can be constructed, trained, and applied to pattern recognition tasks. As an example application of the proposed method, the exclusive-or (XOR) problem is studied.
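One concrete construction consistent with the abstract's description, sketched here under stated assumptions: the second derivative of the sigmoid s(x) equals the polynomial s - 3s^2 + 2s^3 in s and has a zero-mean, localized, wavelet-like shape. Below it serves as the hidden activation of a small network trained by plain backpropagation on XOR. The 2-4-1 architecture, the amplitude scaling, and the learning rate are illustrative (not the paper's setup), and convergence depends on initialization:

```python
import numpy as np

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

def psi(x):
    # polynomial wavelet: 10*(s - 3 s^2 + 2 s^3), a scaled second derivative
    # of the sigmoid (the factor 10 only normalizes its small amplitude)
    s = sig(x)
    return 10.0 * (s - 3 * s**2 + 2 * s**3)

def dpsi(x):
    # chain rule: d(psi)/ds * ds/dx, with ds/dx = s - s^2
    s = sig(x)
    return 10.0 * (1 - 6 * s + 6 * s**2) * (s - s**2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # 2-4-1 network
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

for _ in range(20000):                            # plain batch backpropagation
    z1 = X @ W1 + b1
    h = psi(z1)                                   # wavelet hidden activation
    out = sig(h @ W2 + b2)                        # sigmoid output unit
    g2 = (out - y) * out * (1 - out)
    g1 = (g2 @ W2.T) * dpsi(z1)
    W2 -= 0.5 * h.T @ g2; b2 -= 0.5 * g2.sum(0)
    W1 -= 0.5 * X.T @ g1; b1 -= 0.5 * g1.sum(0)

print(np.round(out.ravel(), 2))                   # outputs for the four XOR patterns
```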
Abstract:
In this paper we use the Hermite-Biehler theorem to establish results for the design of fixed-order controllers for a class of time delay systems. We extend results from the polynomial case to quasipolynomials using the property of interlacing at high frequencies of the class of time delay systems considered. (C) 2003 Elsevier B.V. All rights reserved.
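For reference, this is the classical polynomial form of the Hermite-Biehler criterion that such extensions build on, stated in standard textbook form (a sketch for orientation, not a quotation from the paper):

```latex
% Hermite--Biehler (classical polynomial form). Split the characteristic
% polynomial on the imaginary axis into real and imaginary parts:
\[
  \delta(j\omega) = p(\omega) + j\,q(\omega), \qquad p,\ q \ \text{real.}
\]
% Then \delta is Hurwitz-stable if and only if
\[
  \begin{cases}
    p \ \text{and} \ q \ \text{have only real, simple, interlacing zeros, and}\\[2pt]
    q'(\omega_0)\,p(\omega_0) - q(\omega_0)\,p'(\omega_0) > 0
      \ \text{for some} \ \omega_0 \in \mathbb{R}.
  \end{cases}
\]
% For a delay system the test is applied to the quasipolynomial
% \delta(s) = d(s) + e^{-sT}\, n(s), which has infinitely many zeros; the
% extension above exploits interlacing at high frequencies in that setting.
```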
Abstract:
Optimised placement of control and protective devices in distribution networks allows for better operation and improved reliability indices of the system. Control devices (used to reconfigure the feeders) are placed in distribution networks to obtain an optimal operation strategy that facilitates power supply restoration in the case of a contingency. Protective devices (used to isolate faults) are placed in distribution systems to improve the reliability and continuity of the power supply, significantly reducing the impact of a fault in terms of customer outages and the time needed for fault location and system restoration. This paper presents a novel technique to optimally place both control and protective devices on radial distribution feeders in a single optimisation process. The problem is modelled through mixed integer non-linear programming (MINLP) with real and binary variables, and the reactive tabu search (RTS) algorithm is proposed to solve it. Results and optimised strategies for placing control and protective devices on a practical feeder are presented. (c) 2007 Elsevier B.V. All rights reserved.
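A generic reactive tabu search skeleton, for orientation only; the paper's MINLP device-placement model, move set, and cost function are not reproduced. The "reactive" rule grows the tabu tenure when visited solutions start repeating and relaxes it after a long stretch without repetition:

```python
import random

def reactive_tabu_search(cost, n_bits, iters=300, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]   # one bit per candidate site
    best, best_cost = x[:], cost(x)
    tabu = {}                                        # bit index -> iteration it frees up
    seen, last_repeat, tenure = set(), 0, 5
    for it in range(1, iters + 1):
        key = tuple(x)
        if key in seen:                              # cycling: react by raising tenure
            tenure, last_repeat = min(tenure + 2, n_bits), it
        elif it - last_repeat > 50:                  # long quiet spell: relax tenure
            tenure = max(tenure - 1, 1)
        seen.add(key)
        neighbors = []
        for i in range(n_bits):                      # single-bit-flip neighborhood
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            if tabu.get(i, 0) >= it and c >= best_cost:
                continue                             # tabu move, no aspiration
            neighbors.append((c, i, y))
        if not neighbors:
            continue
        c, i, x = min(neighbors)
        tabu[i] = it + tenure                        # forbid flipping bit i back
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

# Toy usage: the "cost" counts deviations from an arbitrary target placement.
target = [1, 0, 1, 1, 0, 0, 1, 0]
best, c = reactive_tabu_search(lambda x: sum(a != b for a, b in zip(x, target)), 8)
print(best, c)
```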
Abstract:
Energy policies and technological progress in the development of wind turbines have made wind power the fastest growing renewable power source worldwide. The inherent variability of this resource requires special attention when analyzing the impacts of high penetration on the distribution network. A time-series steady-state analysis is proposed that assesses technical issues such as energy export, losses, and short-circuit levels. A multiobjective programming approach based on the nondominated sorting genetic algorithm (NSGA) is applied in order to find configurations that maximize the integration of distributed wind power generation (DWPG) while satisfying voltage and thermal limits. The approach has been applied to a medium voltage distribution network considering hourly demand and wind profiles for part of the U.K. The Pareto optimal solutions obtained highlight the drawbacks of using a single demand and generation scenario, and indicate the importance of appropriate substation voltage settings for maximizing the connection of DWPG.
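The core of NSGA-style selection is nondominated sorting. A minimal sketch follows, with two invented objectives for each candidate configuration (maximize connected wind capacity in MW, minimize voltage-limit violations); the sample points are made up for illustration:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one. Here a = (capacity_MW, violations): capacity
    is maximized and violations minimized."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    better = a[0] > b[0] or a[1] < b[1]
    return no_worse and better

def nondominated_fronts(points):
    """Peel off successive Pareto fronts (front 1 = nondominated set)."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

configs = [(12.0, 3), (10.0, 0), (12.0, 1), (8.0, 0), (11.0, 2)]
for rank, front in enumerate(nondominated_fronts(configs), 1):
    print(f"front {rank}: {front}")
```

The full algorithm additionally applies a diversity-preserving mechanism (fitness sharing, in the original NSGA) to spread solutions along each front.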
Abstract:
Separation methods see limited application as a result of their operational costs, low throughput, and the long time needed to separate the fluids. These treatment methods are nevertheless important because of the need to extract unwanted contaminants from the produced oil: the concentration of oil in water must be minimal (around 20 to 40 ppm) before the water can be discharged into the sea. Given this need for primary treatment, the objective of this project is to study and implement algorithms for closed-loop identification of polynomial NARX (Nonlinear Auto-Regressive with Exogenous Input) models, to implement structural identification, and to compare strategies using PI control and on-line-updated NARX predictive models on a three-phase separator in series with three hydrocyclone batteries. The main goals are: to obtain an optimized phase-separation process that keeps the system regulated even in the presence of oil gushes; to show that it is possible to obtain optimized controller tunings by analyzing the loop as a whole; and to evaluate and compare PI and predictive control strategies applied to the process. To accomplish these goals, a simulator representing the three-phase separator and hydrocyclones was used. Algorithms were developed for system identification (NARX) using RLS (Recursive Least Squares), along with methods for model structure detection. Predictive control algorithms with on-line-updated NARX models were also implemented, together with optimization algorithms using PSO (Particle Swarm Optimization). The project ends with a comparison of the results obtained with the PI and predictive controllers (both tuned through the particle swarm algorithm) on the simulated system. It concludes that the optimizations make the system less sensitive to external perturbations and that, once optimized, the two controllers show similar results, with the predictive controller somewhat less sensitive to disturbances.
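An illustrative sketch of the identification step described above: recursive least squares (RLS) estimating the parameters of a small polynomial NARX model y(k) = a*y(k-1) + b*u(k-1) + c*y(k-1)^2. The "true" plant and the regressor structure are invented for the example; the structure-detection step the project mentions is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
u = rng.uniform(-1, 1, n)
y = np.zeros(n)
for k in range(1, n):                     # simulated plant (ground truth)
    y[k] = 0.7 * y[k-1] + 0.4 * u[k-1] - 0.1 * y[k-1]**2 + 0.01 * rng.normal()

theta = np.zeros(3)                       # estimates of [a, b, c]
P = np.eye(3) * 1000.0                    # large initial covariance
lam = 0.99                                # forgetting factor
for k in range(1, n):
    phi = np.array([y[k-1], u[k-1], y[k-1]**2])   # polynomial NARX regressor
    err = y[k] - phi @ theta                      # one-step prediction error
    gain = P @ phi / (lam + phi @ P @ phi)        # RLS gain vector
    theta += gain * err
    P = (P - np.outer(gain, phi) @ P) / lam       # covariance update

print("estimated [a, b, c]:", np.round(theta, 3))  # should approach [0.7, 0.4, -0.1]
```

The forgetting factor is what allows the on-line model updates mentioned in the abstract: recent samples dominate, so the model tracks slow plant changes.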
Abstract:
The multilayer perceptron network has become one of the most widely used in the solution of a wide variety of problems. Its training is based on the supervised method, where inputs are presented to the neural network and the output is compared with a desired value. However, the algorithm presents convergence problems when the desired output of the network has a small slope across the discrete time samples or is a quasi-constant value. This paper presents an alternative approach that solves this convergence problem by pre-conditioning the desired output data set before the training process and post-conditioning the outputs when the generalization results are obtained. Simulation results are presented in order to validate the proposed approach.
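A minimal sketch of the pre/post-conditioning idea (the exact transform used in the paper is not reproduced): affinely rescale a near-constant, small-slope target signal into a range where the output activation has useful gradient, train against the conditioned targets, then invert the map on the network's outputs:

```python
import numpy as np

def precondition(d, lo=0.2, hi=0.8):
    """Affinely map targets into [lo, hi]; return the transform parameters."""
    dmin, dmax = d.min(), d.max()
    scale = (hi - lo) / (dmax - dmin)
    return lo + (d - dmin) * scale, (dmin, lo, scale)

def postcondition(y, params):
    """Invert the affine map on the network's generalization outputs."""
    dmin, lo, scale = params
    return dmin + (y - lo) / scale

d = 5.0 + 0.001 * np.sin(np.linspace(0, 6, 50))   # quasi-constant desired output
d_cond, params = precondition(d)
# ... train the MLP against d_cond instead of d ...
recovered = postcondition(d_cond, params)
print(np.allclose(recovered, d))                  # True: the map is invertible
```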
Abstract:
A branch and bound (B&B) algorithm using the DC model is presented to solve the power system transmission expansion planning problem with electrical losses incorporated into the network modelling. This is a mixed integer nonlinear programming (MINLP) problem; in this approach, the so-called fathoming tests of the B&B algorithm were redefined, and a nonlinear programming (NLP) problem is solved at each node of the B&B tree using an interior-point method. Pseudocosts were used to manage the development of the B&B tree and to reduce its size and the processing time. There is no guarantee of convergence towards the global optimum of the MINLP problem. However, preliminary tests show that the algorithm easily converges to the best-known or optimal solutions for all the tested systems when electrical losses are neglected. When the electrical losses are taken into account, the solution obtained for the Garver system is better than the best one known in the literature.
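A generic depth-first branch-and-bound skeleton, for orientation; the paper's DC model, interior-point NLP per node, and pseudocost branching are not shown. Here `relax` stands in for the per-node relaxation solve, and a node is fathomed when its bound cannot beat the incumbent:

```python
import math

def branch_and_bound(relax, is_integer, branch, root):
    best_cost, best_sol = math.inf, None
    stack = [root]                           # each node fixes some variables
    while stack:
        node = stack.pop()
        bound, sol = relax(node)             # solve the node's relaxation
        if bound >= best_cost:               # fathoming test: bound too weak
            continue
        if is_integer(sol):                  # feasible: update the incumbent
            best_cost, best_sol = bound, sol
            continue
        stack.extend(branch(node, sol))      # split on a fractional variable
    return best_cost, best_sol

# Toy usage: minimize (x - 2.3)^2 over integer x in [0, 5].
def relax(node):
    lo, hi = node
    x = min(max(2.3, lo), hi)                # continuous minimizer in the box
    return (x - 2.3) ** 2, x

def branch(node, x):
    lo, hi = node
    return [(lo, math.floor(x)), (math.ceil(x), hi)]

print(branch_and_bound(relax, lambda x: float(x).is_integer(), branch, (0, 5)))
```

In the paper's setting the relaxation at each node is itself an NLP, which is why the quality of the fathoming tests matters so much for tree size.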
Abstract:
This paper analyses the impact of choosing good initial populations for genetic algorithms on convergence speed and final solution quality. Test problems were taken from complex electricity distribution network expansion planning. Constructive heuristic algorithms, particularly those used in transmission network expansion planning, were used to generate good initial populations. The results were compared with those found by a genetic algorithm with random initial populations: an efficiently generated initial population led to better solutions being found in less time on low-complexity electricity distribution networks, and to better-quality solutions on highly complex networks.
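A sketch of the seeding idea, under stated assumptions: part of the initial population is built from perturbed copies of a constructive greedy heuristic's solution and the rest is random. The "expansion planning" problem is reduced here to a toy 0/1 selection under a budget; the real planning models are far richer:

```python
import random

def greedy_seed(benefit, cost, budget):
    """Constructive heuristic: add candidates by benefit/cost until the budget."""
    order = sorted(range(len(cost)), key=lambda i: benefit[i] / cost[i], reverse=True)
    x, spent = [0] * len(cost), 0.0
    for i in order:
        if spent + cost[i] <= budget:
            x[i], spent = 1, spent + cost[i]
    return x

def initial_population(size, benefit, cost, budget, seeded_fraction=0.3, seed=0):
    rng = random.Random(seed)
    base = greedy_seed(benefit, cost, budget)
    pop = []
    for k in range(size):
        if k < size * seeded_fraction:        # perturbed copies of the heuristic
            ind = [b ^ (rng.random() < 0.05) for b in base]
        else:                                 # plain random individuals
            ind = [rng.randint(0, 1) for _ in base]
        pop.append(ind)
    return pop

pop = initial_population(20, [5, 3, 8, 2], [4, 2, 5, 1], budget=8)
print(pop[0], pop[-1])
```

Keeping a random fraction preserves diversity, so the seeded individuals accelerate convergence without collapsing the search onto the heuristic's solution.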
Abstract:
In this paper we describe RTsim, a scheduler simulator for real-time tasks that can be used as a tool to teach real-time scheduling algorithms. It simulates a variety of preprogrammed scheduling policies for single- and multi-processor systems, as well as simple algorithm variants introduced by its user. Using RTsim, students can conduct experiments that allow them to understand the effects of each policy under different load conditions and learn which policy is better for different workloads. We show how to use RTsim as a learning tool and report the results achieved with its application in the Real-Time Systems course taught in the B.Sc. in Computer Science at São Paulo State University (Unesp), Rio Preto.
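A small sketch in the spirit of such a simulator (not RTsim's actual code): preemptive Earliest Deadline First over periodic tasks on one processor, in unit time ticks, reporting deadline misses. Task sets and the horizon are arbitrary:

```python
def edf_simulate(tasks, horizon):
    """tasks: list of (period, wcet) with implicit deadlines (= periods)."""
    jobs, misses = [], 0                           # job = [abs_deadline, remaining, task_id]
    for t in range(horizon):
        for tid, (period, wcet) in enumerate(tasks):
            if t % period == 0:
                jobs.append([t + period, wcet, tid])   # release a new job
        expired = [j for j in jobs if j[0] <= t]       # past deadline, unfinished
        misses += len(expired)
        jobs = [j for j in jobs if j[0] > t]
        if jobs:
            job = min(jobs)                            # earliest deadline first
            job[1] -= 1                                # run it for one tick
            if job[1] == 0:
                jobs.remove(job)
    return misses

# Utilization 2/4 + 2/8 = 0.75 <= 1, so EDF should miss no deadlines.
print(edf_simulate([(4, 2), (8, 2)], 64))
# Overload (utilization 3/4 + 4/8 = 1.25) forces misses under any policy.
print(edf_simulate([(4, 3), (8, 4)], 64))
```

Swapping the `min(jobs)` key (e.g., sorting by period instead of absolute deadline) turns this into a rate-monotonic experiment, which is the kind of comparison the course exercises target.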