177 results for Evolutionary algorithm
Abstract:
The so-called Dual Mode Adaptive Robust Control (DMARC) is proposed. The DMARC is a control strategy that interpolates between Model Reference Adaptive Control (MRAC) and Variable Structure Model Reference Adaptive Control (VS-MRAC). The main idea is to combine the transient performance advantages of the VS-MRAC controller with the smooth steady-state control signal of the MRAC controller. Two basic algorithms are developed for the DMARC controller. In the first, the controller is adjusted in real time by varying a parameter in the adaptation law. In the second, the control law is generated using fuzzy logic with the Takagi-Sugeno model to obtain a combination of the MRAC and VS-MRAC control laws. In both cases, the combined control structure is shown to be robust to parametric uncertainties and external disturbances, with fast transient performance, practically free of oscillations, and a smooth steady-state control signal.
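The interpolation idea can be sketched as follows. This is an illustrative blend of the two laws, not the thesis's exact adaptation law: the error signal `e`, the gains and the blend parameter `alpha` are all assumptions.

```python
import numpy as np

def dmarc_theta_step(e, theta, alpha, gamma, theta_bar, dt):
    """One illustrative DMARC-style parameter step: alpha in [0, 1]
    blends the MRAC integral adaptation (alpha = 0) with the
    VS-MRAC switching law (alpha = 1)."""
    theta_mrac = theta - gamma * e * dt      # MRAC: integral adaptation
    theta_vs = -theta_bar * np.sign(e)       # VS-MRAC: relay (switching) law
    return (1.0 - alpha) * theta_mrac + alpha * theta_vs
```

Driving `alpha` toward 1 during transients and toward 0 near steady state reproduces, in miniature, the DMARC trade-off described above.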
Abstract:
This work addresses the analysis and development of multivariable predictive controllers based on bilinear multi-models. Monovariable and multivariable linear Generalized Predictive Control (GPC) is reviewed, highlighting its properties, key features and industrial applications. Bilinear GPC, the basis for the development of this thesis, is presented through the time-step quasilinearization approach. Results are presented to show that this controller outperforms linear GPC, since bilinear models better represent the dynamics of certain processes. Because time-step quasilinearization is an approximation, it introduces a prediction error, which limits the controller's performance as the prediction horizon increases. To minimize this error, bilinear GPC with iterative compensation is presented, seeking better performance than classic bilinear GPC, and results of the iterative compensation algorithm are shown. The use of multi-models is then discussed in order to correct the deficiency of single-model controllers when they are applied over large operating ranges. Methods for measuring the distance between models, also called metrics, are the main contribution of this thesis. Several applications to simulated distillation columns, which are close to the behaviour of real columns, are carried out, and the results are satisfactory.
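The time-step quasilinearization idea can be sketched on a toy scalar bilinear model (the coefficients and the model below are illustrative assumptions, not the thesis's process models):

```python
def bilinear_step(y, u, a=0.9, b=0.5, c=0.1):
    """True bilinear model (illustrative coefficients):
    y(k+1) = a*y(k) + b*u(k) + c*y(k)*u(k)."""
    return a*y + b*u + c*y*u

def quasilinear_predict(y0, u_seq, u_prev, a=0.9, b=0.5, c=0.1):
    """Multi-step prediction with time-step quasilinearization: the
    bilinear term c*y*u is linearised by freezing the input at the
    last applied value u_prev, making each step linear in y and u."""
    preds, y = [], y0
    for u in u_seq:
        y = (a + c*u_prev)*y + b*u    # c*y*u approximated by (c*u_prev)*y
        preds.append(y)
    return preds
```

When the future input stays at `u_prev` the prediction is exact; as soon as it departs from `u_prev`, the approximation error described in the abstract appears and grows with the horizon, which is what the iterative compensation targets.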
Fault detection and isolation system for dynamic systems based on parametric identification
Abstract:
This research aims to contribute to the area of fault detection and diagnosis through the proposal of a new Fault Detection and Isolation (FDI) system architecture. The proposed architecture introduces innovations in the way the monitored physical variables are linked to the FDI system and, as a consequence, in the way faults are detected, isolated and classified. A search for mathematical tools able to satisfy the objectives of the proposed architecture pointed to the Kalman Filter and its derivatives, the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF). The first is efficient when the monitored process presents a linear relation between the monitored physical variables and the output; the other two are suited to nonlinear dynamics. A short comparison of their features and capabilities in the context of fault detection concludes that the UKF is a better alternative than the EKF for composing the proposed FDI architecture when the process dynamics are nonlinear. The results presented at the end of the research refer to linear and nonlinear industrial processes, and the efficiency of the proposed architecture is demonstrated by its application to both simulated and real processes. The contributions of this thesis are summarized at the end of the text.
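The residual-generation idea behind Kalman-filter-based FDI can be sketched with a scalar filter; the state model, noise levels and threshold below are illustrative assumptions, not the architecture's actual tuning:

```python
import numpy as np

def kf_fault_detect(zs, a=1.0, h=1.0, q=1e-4, r=1e-2, thresh=3.0):
    """Scalar Kalman filter whose innovation sequence drives a simple
    residual test (illustrative of the FDI residual-generation idea):
    a fault is flagged when |innovation| exceeds thresh * sqrt(S),
    S being the innovation covariance."""
    x, p = 0.0, 1.0
    alarms = []
    for z in zs:
        x, p = a * x, a * p * a + q          # predict
        s = h * p * h + r                    # innovation covariance
        nu = z - h * x                       # innovation (residual)
        alarms.append(abs(nu) > thresh * np.sqrt(s))
        k = p * h / s                        # update
        x, p = x + k * nu, (1 - k * h) * p
    return alarms
```

For a nonlinear process the predict/update pair would be replaced by EKF or UKF steps, but the residual test on the innovation stays the same.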
Abstract:
The number of applications based on embedded systems grows significantly every year, and even though embedded systems have restrictions and simple processing units, their performance improves constantly. However, as the complexity of applications also increases, better performance will always be necessary, and despite such advances there are cases in which an embedded system with a single processing unit is not sufficient to process the information in real time. To improve the performance of these systems, parallel processing can be used in more complex applications that require high performance. The idea is to move beyond applications that already use embedded systems, exploring the use of a set of processing units working together to implement an intelligent algorithm. There is a wide range of existing work in the areas of parallel processing, intelligent systems and embedded systems; however, works that link these three areas to solve a problem are scarce. In this context, this work uses the tools available for FPGA architectures to develop a multiprocessor platform for pattern classification with artificial neural networks.
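One natural way to split a neural-network classifier across several processing units is to partition the neurons of each layer; the sketch below illustrates that partitioning in plain NumPy (the layer shape and the partition scheme are assumptions, not the thesis's FPGA design):

```python
import numpy as np

def layer_parallel(x, W, b, n_units):
    """Sketch of neuron-level parallelism: the neurons of one layer are
    split among n_units processing units; each unit computes the
    activations of its own slice of the weight matrix independently."""
    parts = np.array_split(np.arange(len(b)), n_units)
    out = np.empty(len(b))
    for unit in parts:   # on a multiprocessor platform, each slice runs on its own processor
        out[unit] = np.tanh(W[unit] @ x + b[unit])
    return out
```

Because the slices share no state, the loop body can run concurrently, which is what makes this decomposition attractive for a soft multiprocessor on an FPGA.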
Abstract:
This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standard, called Group Sequential Communication (GSC). The GSC performs better than the HCCA mechanism for small data packets by adopting decentralized medium access control with a publish/subscribe communication scheme. The main objective of the thesis is to reduce the HCCA overhead of the Poll, ACK and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme used by the HCCA scheduling algorithm through a Virtual Token Passing procedure among the members of the real-time group, to whom high-priority, sequential access to the communication medium is granted. To improve the reliability of the proposed mechanism over a noisy channel, an error recovery scheme called the second chance algorithm is presented. This scheme is based on a block acknowledgment strategy that allows missing real-time messages to be retransmitted. Thus, the GSC mechanism sustains real-time traffic across many IEEE 802.11/11e devices with optimized bandwidth usage and minimal delay variation for data packets in the wireless network. To validate the communication scheme, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++ and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
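The overhead argument can be made concrete with a back-of-envelope frame count; the three-frames-per-station accounting and the rotation helper below are simplifications for illustration, not the thesis's exact frame model:

```python
def hcca_frames(n_stations):
    """Frames per HCCA polling round (simplified): for each polled
    station the Hybrid Coordinator sends a QoS CF-Poll, the station
    answers with a data frame (or QoS Null when idle), and an ACK
    follows."""
    return 3 * n_stations

def gsc_frames(n_stations):
    """Frames per GSC round (simplified): the virtual token makes the
    transmit order implicit, so only the data frames reach the medium."""
    return n_stations

def virtual_token_order(group, start=0):
    """Transmission order under virtual token passing: each member
    deduces its turn from the agreed group sequence, with no polling
    frame carrying the token."""
    n = len(group)
    return [group[(start + i) % n] for i in range(n)]
```

For small data packets the per-frame overhead dominates, so cutting three frames per station to one is precisely where the GSC's advantage over HCCA comes from.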
Abstract:
We propose a new paradigm for collective learning in multi-agent systems (MAS) as a solution to the problem in which several agents acting over the same environment must learn, simultaneously, how to perform tasks based on feedback given by each of the other agents. We introduce the proposed paradigm in the form of a reinforcement learning algorithm, naming it reinforcement learning with influence values. While learning from rewards, each agent evaluates the relation between its current state and/or the action executed at that state (its current belief) together with the reward obtained after all interacting agents perform their actions; the reward is thus a result of the interference of the others. The agent considers the opinions of all its colleagues when attempting to change the values of its states and/or actions. The idea is that the system as a whole must reach an equilibrium in which all agents are satisfied with the obtained results, meaning that the values of the state/action pairs match the rewards obtained by each agent. This dynamic way of setting the values of states and/or actions makes this new reinforcement learning paradigm the first to naturally incorporate the fact that the presence of other agents makes the environment dynamic. As a direct result, we implicitly include the internal states, the actions and the rewards obtained by all the other agents in the internal state of each agent. This makes our proposal the first complete solution to the conceptual problem that arises when applying reinforcement learning to multi-agent systems, which is caused by the mismatch between the environment model and the agent model. Based on the proposed model, we create the IVQ-learning algorithm, which is exhaustively tested in repeated games with two, three and four agents, and in stochastic games that require cooperation or collaboration.
This algorithm proves to be a good option for obtaining solutions that guarantee convergence to the optimal Nash equilibrium in cooperative problems. The experiments clearly show that the proposed paradigm is theoretically and experimentally superior to the traditional approaches. Moreover, this new paradigm enlarges the set of reinforcement learning applications in MAS: besides traditional learning problems, such as the coordination of tasks in multi-robot systems, reinforcement learning can now be applied to problems that are essentially collaborative.
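The influence-value idea can be sketched as a Q-learning update whose target is augmented by the opinions broadcast by the other agents; the exact form and weighting of the influence term in IVQ-learning may differ, so treat `beta` and the additive term as assumptions:

```python
def ivq_update(q, s, a, r, s_next, influences,
               alpha=0.1, gamma=0.9, beta=0.5):
    """One IVQ-learning-style update (illustrative): the standard
    Q-learning target r + gamma * max_a' Q(s', a') is augmented with
    the influence values the other agents report about this agent's
    last action, weighted by beta."""
    target = r + beta * sum(influences) + gamma * max(q[s_next].values())
    q[s][a] += alpha * (target - q[s][a])
    return q[s][a]
```

With `influences` all zero this reduces to ordinary Q-learning; positive or negative opinions from the colleagues pull the state/action value toward the collective equilibrium described above.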
Abstract:
With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which the semantic content is taken into account. Accordingly, several proposals have been made, such as retrieval based on textual annotations, in which recovery is based on comparing the textual description a user gives of an image with the descriptions of the images stored in the database. Among its drawbacks, the textual description is highly observer-dependent, and considerable effort is required to describe all the images in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape and texture. Results in the CBIR area have been very promising; however, representing image semantics by low-level features remains an open problem. New feature-extraction algorithms, as well as new indexing methods, have been proposed in the literature, but these algorithms become increasingly complex. It is therefore natural to ask: is there a relationship between semantics and the low-level features extracted from an image? If so, which descriptors best represent the semantics? And, consequently, how should descriptors be used to represent the content of the images? This thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions.
It was also observed that there are three possibilities for indexing images: using composite feature vectors; using parallel and independent index structures (one per descriptor or set of descriptors); and using feature vectors sorted in a sequential order. The first two forms have been widely studied and applied in the literature, but there is no record of the third ever having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, in which the order of the descriptors is based on the relationship between each descriptor and the semantics of the users. The proposed index proved better than the traditional approaches, and it was shown experimentally that the order in the sequence matters: there is a direct relationship between this order and the relationship of the low-level descriptors with the semantics of the users.
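The third indexing possibility can be sketched as a cascade of filters; the scalar "images", the toy descriptors and the `keep` fraction below are illustrative assumptions, not the thesis's actual descriptors or index structure:

```python
def sequential_search(query, database, descriptors, keep=0.5):
    """Sketch of sequential descriptor indexing: descriptors are applied
    in the order of their assumed correlation with user semantics, and
    each stage keeps only the fraction `keep` of candidates closest to
    the query, so weaker descriptors merely refine an already-reduced
    candidate set."""
    candidates = list(database)
    for extract in descriptors:
        target = extract(query)
        candidates.sort(key=lambda item: abs(extract(item) - target))
        candidates = candidates[:max(1, int(len(candidates) * keep))]
    return candidates
```

Ordering the descriptor list by semantic relevance matters here for the same reason it matters in the thesis: the first descriptor decides which candidates survive to be compared by the later, weaker ones.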
Abstract:
This work analyzes the gas flow behaviour of plunger lift wells producing to well-testing separators on offshore production platforms, aiming at a technical procedure to estimate the gas flow during the slug production period. The motivation for this work came from wells equipped by PETROBRAS with the plunger lift method in the Ubarana offshore field, located off the coast of Rio Grande do Norte State, where the produced fluids are measured in well-testing separators at the platform. The artificial lift method called plunger lift is used when the available reservoir energy is not high enough to overcome all the load losses needed to lift the oil continuously from the bottom of the well to the surface. The method consists, basically, of a free piston acting as a mechanical interface between the formation gas and the produced liquids, greatly increasing the well's lifting efficiency. A pneumatic control valve mounted on the flow line controls the cycles: when this valve opens, the plunger moves from the bottom to the surface of the well, lifting all the oil and gas above it until it reaches the well-test separator, where the fluids are measured. The well-test separator measures all the volumes produced by the well during a certain period of time called a production test. In most cases, separators are designed to measure stabilized, i.e. reasonably constant, flow, using electronic level and pressure controllers (PLCs) and assuming a steady pressure inside the separator. With plunger lift wells, the liquid and gas flows at the surface are cyclic and unstable, which causes slugs inside the separator, mainly in the gas phase, and introduces significant errors in the measurement system (e.g., overrange errors).
The gas flow analysis proposed in this work is based on two mathematical models used together: (i) a plunger lift well model proposed by Baruzzi [1], with later modifications by Bolonhini [2] to build a plunger lift simulator; and (ii) a two-phase (gas + liquid) separator model derived from the three-phase (gas + oil + water) separator model proposed by Nunes [3]. Based on these models and on field data collected from the well-test separator of the PUB-02 platform (Ubarana field), it was possible to demonstrate that the output gas flow of the separator can be estimated, with reasonable precision, from the control signal of the Pressure Control Valve (PCV). Several models from the MATLAB® System Identification Toolbox were analyzed to evaluate which one best fits the data collected from the field. For model validation, the AIC criterion was used, as well as a variant of the cross-validation criterion. The ARX model fitted the data best, so a recursive algorithm (RARX) was also evaluated with real-time data. The results were quite promising, indicating the viability of estimating the output gas flow rate of a plunger lift well producing to a well-test separator from the control signal applied to the PCV.
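An ARX fit of the kind selected above can be sketched as ordinary least squares on lagged inputs and outputs; the sketch below omits the noise model and the recursive (RARX) machinery, and the simulated data in the usage are illustrative:

```python
import numpy as np

def fit_arx(y, u, na=1, nb=1):
    """Ordinary least-squares fit of an ARX(na, nb) model,
    y(k) = a1*y(k-1) + ... + a_na*y(k-na)
         + b1*u(k-1) + ... + b_nb*u(k-nb),
    returning the parameter vector [a1..a_na, b1..b_nb]."""
    n0 = max(na, nb)
    rows = [[y[k - i] for i in range(1, na + 1)] +
            [u[k - j] for j in range(1, nb + 1)]
            for k in range(n0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(y[n0:]),
                                rcond=None)
    return theta
```

In this work the input `u` would be the PCV control signal and `y` the separator output gas flow; a recursive variant updates `theta` sample by sample instead of solving one batch problem.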
Abstract:
In this work, the robustness and performance of the variable structure adaptive pole placement controller (VS-APPC) are evaluated, and the algorithm is applied to a motor control system. The controller's robustness is evaluated through simulations in which the following adversities are introduced into the system: time delay, actuator output bounds, disturbances, parametric variation and unmodeled dynamics. The VS-APPC is compared with a PI controller, a pole placement controller (PPC) and an adaptive pole placement controller (APPC). The VS-APPC is simulated tracking step and sinusoidal references, and it is applied to a three-phase induction motor control system to track a sinusoidal signal in the stator reference frame. Simulation and experimental results demonstrate the efficiency and robustness of this control strategy.
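For reference, the simplest of the baselines above can be sketched as a discrete PI controller with output saturation; the gains and limits are illustrative, not the thesis's tuning:

```python
def make_pi(kp, ki, dt, u_min=-10.0, u_max=10.0):
    """Discrete PI controller with output saturation (a baseline of the
    kind the VS-APPC is compared against)."""
    state = {"i": 0.0}
    def pi(error):
        state["i"] += ki * error * dt          # integral action
        u = kp * error + state["i"]
        return min(max(u, u_min), u_max)       # bounded actuator output
    return pi
```

The saturation mirrors the "actuator output bounds" adversity injected in the robustness simulations.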
Abstract:
This study developed software routines for a system built basically around a digital signal processor (DSP) board and a supervisory application, whose main function was to correct the measurements of a turbine gas meter. The correction is based on an intelligent algorithm built from an artificial neural network. The routines were implemented both in the supervisory environment and in the DSP environment, and comprise three main parts: processing, communication and supervision.
Abstract:
In the process of oil recovery, rock heterogeneity has a huge impact on how fluids move in the field, defining how much oil can be recovered. Percolation theory, which describes phenomena on the basis of geometry and connectivity, is a very useful model for studying this variability. The result of a percolation simulation is three-dimensional data, which has no physical meaning until it is visualized as images or animations. Although many powerful and sophisticated visualization tools have been developed, they focus on the generation of planar 2D images. To interpret the data as it would appear in the real world, virtual reality techniques using stereo images can be used. In this work we propose an interactive and helpful tool, named ZSweepVR, based on virtual reality techniques, that allows a better comprehension of the volumetric data generated by the simulation of dynamic percolation. The developed system can render images using two different techniques: surface rendering, accomplished through OpenGL directives, and volume rendering, accomplished through the ZSweep direct volume rendering engine. For volumetric rendering, we implemented an algorithm to generate stereo images. We also propose enhancements to the original percolation algorithm in order to obtain better performance. We applied the developed tools to a mature-field database, obtaining satisfactory results. The use of stereoscopic and volumetric images brought valuable contributions to the interpretation and cluster-formation analysis in percolation, which could certainly lead to better decisions about the exploration and recovery process in oil fields.
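The connectivity analysis at the heart of percolation can be sketched as cluster labelling on a 3D site grid; the face-connected (6-neighbour) rule and the toy grid in the usage are assumptions for illustration:

```python
import numpy as np
from collections import deque

def label_clusters(occupied):
    """BFS labelling of face-connected clusters in a 3D site-percolation
    grid: returns an integer label array and the cluster count. This is
    the kind of connectivity data a tool like ZSweepVR would visualise."""
    labels = np.zeros(occupied.shape, dtype=int)
    count = 0
    for idx in np.argwhere(occupied):
        idx = tuple(idx)
        if labels[idx]:
            continue                      # already part of a cluster
        count += 1
        labels[idx] = count
        queue = deque([idx])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if (all(0 <= c < s for c, s in zip(n, occupied.shape))
                        and occupied[n] and not labels[n]):
                    labels[n] = count
                    queue.append(n)
    return labels, count
```

Rendering each label with its own colour (or opacity transfer function, in volume rendering) is what makes the cluster formation visible in stereo.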
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Basically, a self-adaptive scheme iteratively moves the vertices of an initial simple mesh toward the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested with several point sets, including different shapes and sizes, and the generated meshes are very close to the objects' final shapes. We include performance measurements and discuss robustness.
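The Kohonen-style adaptation step can be sketched in 2D with the mesh reduced to a closed ring of vertices; the learning rates, the ring neighbourhood and the iteration count are illustrative assumptions, not the paper's mesh operators:

```python
import numpy as np

def adapt_mesh(vertices, points, lr=0.3, neighbour_lr=0.1, iters=10):
    """Kohonen-style adaptation (illustrative): for each sample point,
    the closest mesh vertex (the winner) is pulled toward it, and its
    ring neighbours are pulled with a smaller rate, so the mesh drifts
    toward the point cloud while staying smooth."""
    v = vertices.copy()
    n = len(v)
    for _ in range(iters):
        for p in points:
            w = np.argmin(np.linalg.norm(v - p, axis=1))  # winner vertex
            v[w] += lr * (p - v[w])
            for nb in ((w - 1) % n, (w + 1) % n):         # ring neighbours
                v[nb] += neighbour_lr * (p - v[nb])
    return v
```

In the full method this adaptation alternates with selective refinement: edges in poorly fitted regions are split, adding vertices where detail is needed.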
Abstract:
Problems of combinatorial optimization have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that these problems cannot be solved in polynomial time. Initially, such solutions were based on heuristics; currently, metaheuristics are preferred, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic, called the Operon, for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, built mainly with statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses adequate for the evaluation of the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an "intelligent" search in the solution space. The Traveling Salesman Problem (TSP) is the target of applications based on a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first uses logistic regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second uses survival analysis, based on the probability distribution of the execution time observed until an optimal solution is reached.
The third uses non-parametric analysis of variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance; the last four evaluate the performance of ProtoG in comparison with the three other algorithms. For these sixty-one instances, statistical tests give evidence that ProtoG performs better than the other three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which performance was evaluated through the PES, the average PES obtained with ProtoG was below 1% in almost half of the instances, reaching its largest value, 3.52%, for an instance of 1,173 cities. ProtoG can therefore be considered a competitive algorithm for solving the TSP, since average PES values greater than 10% are commonly reported in the literature for instances of this size.
Abstract:
This work proposes a computational environment for teaching control systems, called ModSym. The software implements a graphical interface for modeling linear physical systems and shows, step by step, the processing needed to obtain mathematical models of these systems. A physical system can be represented in the software in three different ways: by a graphical diagram built from elements of the electrical, translational mechanical, rotational mechanical and hydraulic domains; by bond graphs; or by signal flow graphs. Once the system is represented, ModSym can compute its transfer functions in symbolic form using Mason's rule, as well as transfer functions in numeric form and parametric sensitivity functions. The work also proposes an algorithm to obtain the signal flow graph of a physical system from its bond graph. This algorithm, together with the system-analysis methodology known as the Network Method, allows Mason's rule to be used to compute the transfer functions of the systems modeled in the software.
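Mason's rule can be sketched numerically for graphs whose loop structure is supplied by the caller; this simplified version truncates the determinant at second-order loop products (enough for many textbook graphs), whereas a full implementation such as ModSym's would enumerate higher-order non-touching loop sets as well:

```python
def mason_gain(path_gains, free_loops, loop_gains, nontouching_pairs):
    """Mason's gain formula, truncated at second-order loop products:
    T = sum_k P_k * Delta_k / Delta, where
    Delta = 1 - sum(L_i) + sum(L_i * L_j over non-touching pairs i < j),
    and Delta_k is built from free_loops[k], the indices of loops not
    touching forward path k."""
    delta = 1 - sum(loop_gains) + sum(loop_gains[i] * loop_gains[j]
                                      for i, j in nontouching_pairs)
    num = sum(p * (1 - sum(loop_gains[i] for i in free))
              for p, free in zip(path_gains, free_loops))
    return num / delta
```

For a unity-feedback loop with forward gain G = 2 and feedback H = 0.5, the single loop gain is -G*H = -1 and the loop touches the forward path, so `mason_gain([2.0], [[]], [-1.0], [])` recovers G / (1 + G*H) = 1.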
Abstract:
Predictive control has gained, in recent years, a growing number of adherents, because of the ease of adjusting its parameters; the extension of its concepts to multi-input/multi-output (MIMO) systems; the fact that nonlinear process models can be linearised around an operating point and used directly in the controller; and, mainly, for being the only methodology that can take into account, during controller design, the limits of the control signals and of the process output. The time-varying weighting generalized predictive control (TGPC) studied in this work is one more alternative among the many existing predictive controllers. It is a modification of generalized predictive control (GPC) that uses a reference model, calculated according to design parameters previously established by the designer, and a new criterion function whose minimization yields the best parameters for the controller. Genetic algorithms are used to minimize the proposed criterion function, and the robustness of the TGPC is demonstrated through performance, stability and robustness criteria. To compare the results achieved by the TGPC, GPC and proportional-integral-derivative (PID) controllers are also used, with all the techniques applied to stable, unstable and non-minimum-phase plants. The simulated examples are carried out with MATLAB. The modifications implemented in the TGPC demonstrate the efficiency of this algorithm.
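A genetic minimization of a criterion function can be sketched as follows; the operators (tournament-free elitist selection, blend crossover, Gaussian mutation) and all hyper-parameters are assumptions for illustration, not the thesis's GA configuration:

```python
import random

def ga_minimize(cost, bounds, pop=30, gens=60, seed=1):
    """Minimal real-coded genetic algorithm: keep the best half of the
    population (elitism), breed children by blend crossover between two
    elites, and apply Gaussian mutation to one coordinate. `cost` maps
    a parameter vector to the criterion value to be minimized."""
    rng = random.Random(seed)
    dim = len(bounds)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        elite = population[:pop // 2]          # survivors
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            k = rng.randrange(dim)                               # mutate one gene
            lo, hi = bounds[k]
            child[k] = min(max(child[k] + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
            children.append(child)
        population = elite + children
    return min(population, key=cost)
```

In the TGPC setting, `cost` would evaluate the proposed criterion function for a candidate controller parameterization; here any smooth test function demonstrates the mechanics.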