Abstract:
Technological innovation generates economic value by creating a new product, process, or organizational management model, and is characterized as dynamic and multidimensional. Because of the high costs and risks of development, government intervention plays a role through grant programs that foster the adoption of innovative processes in small companies, strengthening the country's economy at this stage. The distribution of these grants is determined by criteria based largely on subjective judgments, grounded in beliefs and perceptions about the technological opportunities and market actors involved in the process, which makes it very difficult to measure the probability of success of the project under evaluation. This study aims to identify the most relevant selection criteria to be included in grant programs in Rio Grande do Norte executed by the Fundação de Pesquisa do Rio Grande do Norte (FAPERN). Initially, grant programs from 18 countries were systematized, covering 41 programs abroad and 29 in Brazil. Based on the data collected, a survey was conducted covering four FAPERN programs (INOVA I, INOVA II, INOVA III and INOVA IV) and 44 companies, with responses analyzed on a Likert scale to obtain the degree of importance assigned by each respondent to each criterion in the questionnaire. As a result, a proposal was drawn up with 13 new criteria to be used in future FAPERN grant calls. The study is thus expected to contribute to better use of the public funds invested in subsidized companies in Brazil.
Abstract:
The motivation for this work was the need for a software architecture that supports the development of a SCADA supervisory system for monitoring simulated industrial processes, with the flexibility to add intelligent modules and devices such as PLCs according to the specifications of the problem. In the present study, an intelligent supervisory system was developed on top of a simulation of a distillation column modeled in Unisim. OLE Automation was used for communication between the supervisory and simulation software and, together with a database, yielded an architecture that is both scalable and easy to maintain. In addition, intelligent modules were developed for preprocessing, feature extraction, and variable inference, built fundamentally on the Encog framework.
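As an illustration of the modular chain described above (preprocessing, feature extraction, variable inference), a minimal sketch could look like the following; the function names are hypothetical and the trained model is only a placeholder, since the actual system builds its neural-network modules on Encog.

    # Illustrative sketch of the intelligent-module chain (hypothetical names;
    # in the actual system the inference model would be an Encog-based network).
    import statistics

    def preprocess(raw_samples):
        # Preprocessing module: drop missing values coming from the simulation/database.
        return [x for x in raw_samples if x is not None]

    def extract_features(window):
        # Feature-extraction module: summarize a window of process samples.
        return {"mean": statistics.mean(window), "stdev": statistics.pstdev(window)}

    def infer_variable(features, model):
        # Inference module: a trained model estimates an unmeasured process
        # variable (e.g., a product composition) from the extracted features.
        return model.predict([features["mean"], features["stdev"]])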
Abstract:
In this work, Markov chains are the tool used to model and analyze the convergence of the genetic algorithm, both in its standard version and in the variants the algorithm admits. We also intend to compare the performance of the standard version with a fuzzy version, on the premise that the fuzzy version gives the genetic algorithm a greater ability to find a global optimum, a property expected of global optimization algorithms. This algorithm was chosen because, over the past thirty years, it has become one of the most important tools for solving optimization problems, owing to its effectiveness in finding good-quality solutions; a good-quality solution is acceptable given that, for many of these problems, no algorithm may be able to obtain the optimal solution. The behavior of the algorithm, however, depends not only on how the problem is represented and how the operators are defined, but also on whether the parameters are kept fixed, as in the standard version, or allowed to vary during execution. To achieve good performance, the algorithm therefore needs an adequate criterion for choosing its parameters, especially the mutation rate, the crossover rate, and even the population size. In implementations where the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain yields a homogeneous chain; when the parameters are allowed to vary during execution, the resulting Markov chain is non-homogeneous. In an attempt to improve performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution in order to identify and preserve patterns associated with good-quality solutions while discarding low-quality ones. Feature-extraction strategies may use either crisp or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm in both cases. To evaluate the performance of the non-homogeneous version, tests compare the standard genetic algorithm with a fuzzy genetic algorithm in which the mutation rate is adjusted by a fuzzy controller, using optimization problems whose number of solutions grows exponentially with the number of variables.
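To make the non-homogeneous setting concrete, the sketch below shows a toy genetic algorithm whose mutation rate is recomputed each generation from population diversity by a crude fuzzy-style rule. It is not the controller studied in the thesis; all names, rules, and numeric values are illustrative.

    # Toy GA with a generation-dependent (non-homogeneous) mutation rate.
    import random

    def fuzzy_mutation_rate(diversity, low=0.01, high=0.20):
        # Crude fuzzy-like interpolation: low diversity -> high mutation, and vice versa.
        return high - (high - low) * min(max(diversity, 0.0), 1.0)

    def evolve(fitness, n_bits=20, pop_size=30, generations=100):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            # Fraction of bit positions where both 0 and 1 still occur.
            diversity = sum(len(set(bits)) - 1 for bits in zip(*pop)) / n_bits
            pm = fuzzy_mutation_rate(diversity)      # mutation rate changes over time
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_bits)    # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([1 - g if random.random() < pm else g for g in child])
            pop = children
        return max(pop, key=fitness)

    # Example: maximize the number of ones (a toy problem).
    best = evolve(fitness=sum)

With fixed pm the transition probabilities between populations do not change, giving a homogeneous Markov chain; letting pm depend on the current population, as above, is exactly what makes the modeling chain non-homogeneous.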
Abstract:
During petroleum well production, simultaneous oil and water production is common, in proportions that can vary from 0% to values close to 100% water. Moreover, production flow rates can vary widely, depending on the characteristics of each reservoir. Therefore, the meters used in the field for flow and BSW (water content in the oil) measurement must perform well over wide operating ranges. To evaluate the operation of these meters under different conditions, a laboratory will be built at UFRN with the objective of automatically evaluating petroleum flow and BSW measurement processes under different operating conditions. Good performance of these meters is fundamental to the accuracy of the measured net and gross petroleum production volumes. For this measurement, petroleum companies use meters that must indicate values with the highest possible accuracy and comply with a series of minimum conditions and requirements established by the joint ANP/INMETRO ordinance 19106/2000. The Laboratory for Evaluation of Flow and BSW Measurement Processes will basically comprise an oil tank, a water tank, a mixer, a calibration tank, a separation tank, and a residue tank for fluid disposal, all fundamental to the evaluation of flow and BSW meters. The whole process will be automated through a Programmable Logic Controller (PLC) and a supervisory system. Besides allowing the evaluation of the flow and BSW meters used by petroleum companies, the laboratory will enable the development of research related to automation. It will also contribute to the development of the Computer Engineering and Automation Department, fostering the growth of faculty and students and qualifying them for a continuously growing job market. The present work describes the automation project of the laboratory to be built at UFRN: the system will be automated using a Programmable Logic Controller and a supervisory system, and the PLC programming and the supervisory system screens were developed in this work.
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. Several approaches developed during the work are described, all based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method requires only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link the auxiliary clusters. The number of classes can either be found automatically by the method, based on the chosen threshold distance dt, or be given as additional information to help choose the correct threshold. Analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are examined and a new one, based on the concept of negentropy, is proposed. Besides grouping the points of a set into classes, a statistical modeling of the classes is proposed in order to obtain an expression for the probability that a point belongs to each class. Experiments with several values of Na and dt are performed on test sets, and the results are analyzed to study the robustness of the method and to derive heuristics for choosing the correct threshold. Aspects of information theory applied to the computation of the divergences are explored, in particular the different measures of information and divergence based on Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
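A minimal sketch of the linkage step is shown below. It is illustrative only: plain Euclidean distance between centroids stands in for the dissimilarity and divergence measures studied in the thesis, and the Na centroids are assumed to come from any vector quantizer.

    # Link auxiliary clusters whose centroids are closer than the threshold dt.
    import math
    from itertools import combinations

    def link_auxiliary_clusters(centroids, dt):
        """Two auxiliary clusters end up in the same class if they are connected
        by a chain of centroid pairs closer than dt (union-find over centroids)."""
        parent = list(range(len(centroids)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i, j in combinations(range(len(centroids)), 2):
            if math.dist(centroids[i], centroids[j]) < dt:   # linkage decision
                parent[find(i)] = find(j)

        return [find(i) for i in range(len(centroids))]      # class label per cluster

    # Example: four auxiliary clusters collapse into two classes for dt = 1.5.
    print(link_auxiliary_clusters([(0, 0), (1, 0), (5, 5), (6, 5)], dt=1.5))

The number of distinct labels returned is the number of classes found automatically for the chosen dt.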
Abstract:
A new method to perform TCP/IP fingerprinting is proposed. TCP/IP fingerprinting is the process of identifying a remote machine over a TCP/IP-based computer network, and it has many applications in network security: both intrusion and defense procedures may use it to achieve their objectives. Many known methods perform this process under favorable conditions; nowadays, however, several adversities reduce identification performance. This work aims to create a new OS fingerprinting tool that circumvents these current problems. The proposed method is based on attractor reconstruction and neural networks to characterize and classify pseudo-random number generators.
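For illustration, attractor reconstruction by time-delay embedding can be sketched as follows; the sequence values, delay, and dimension are hypothetical (in an OS-fingerprinting setting the sequence might be values observed from a remote host, such as TCP initial sequence numbers), and the embedded points would then feed a neural-network classifier.

    # Time-delay embedding: reconstruct an attractor from a scalar sequence.
    def delay_embed(sequence, dimension=3, delay=1):
        """Return points x_t = (s_t, s_{t+delay}, ..., s_{t+(dimension-1)*delay})."""
        n = len(sequence) - (dimension - 1) * delay
        return [tuple(sequence[t + k * delay] for k in range(dimension)) for t in range(n)]

    # Example with a short toy sequence of observed values.
    points = delay_embed([12, 37, 58, 91, 104, 133, 150], dimension=3, delay=2)
    # points -> [(12, 58, 104), (37, 91, 133), (58, 104, 150)]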
Abstract:
This study uses a computational model that considers the statistical characteristics of the wind and the reliability characteristics of a wind turbine, such as failure and repair rates, representing the wind farm by a Markov process in order to estimate the annual energy generated and compare it with a real case. The model can also be used in reliability studies and provides performance indicators that help assess the feasibility of setting up a wind farm, provided the power curve and wind speed measurements are available. To validate the model, simulations were performed using the database of the PETROBRAS wind farm in Macau. The results were very close to the real ones, confirming that the model successfully reproduced the behavior of all components involved. Finally, the results of this model were compared with the annual energy estimated by modeling the wind distribution with a Weibull statistical distribution.
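The sketch below illustrates, in simplified form, how failure/repair rates and a Weibull wind model combine into an annual energy estimate. It is not the thesis model: it uses a single turbine, a two-state availability chain, and hypothetical numeric values throughout.

    # Illustrative annual-energy estimate: two-state Markov availability x Weibull wind.
    import math

    def weibull_pdf(v, k=2.0, c=8.0):
        # Wind-speed density for shape k and scale c (m/s); values are placeholders.
        return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

    def power_curve(v, rated_kw=1800.0, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
        # Simplified power curve: cubic rise between cut-in and rated speed.
        if v < v_cut_in or v > v_cut_out:
            return 0.0
        if v >= v_rated:
            return rated_kw
        return rated_kw * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3

    def annual_energy_kwh(lambda_per_h=1e-4, mu_per_h=1e-2, dv=0.1):
        availability = mu_per_h / (lambda_per_h + mu_per_h)   # steady state of the 2-state chain
        expected_kw = sum(power_curve(v) * weibull_pdf(v) * dv # E[P] = integral of P(v) f(v) dv
                          for v in [i * dv for i in range(1, 300)])
        return availability * expected_kw * 8760.0             # hours per year

    print(f"Estimated annual energy: {annual_energy_kwh():.0f} kWh")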
Abstract:
The evolution of automation in recent years has made continuous monitoring of industrial plant processes possible. With this advance, the amount of information that automation systems must handle has increased significantly. The alarms generated by monitoring equipment are a major contributor to this increase, and such equipment is usually deployed in industrial plants without a formal methodology, which raises the number of alarms generated and overloads both the alarm system and the operators of these plants. In this context, alarm management emerges with the objective of defining a formal methodology for installing new equipment and detecting problems in existing configurations. This thesis proposes a set of metrics for evaluating alarm systems already deployed, so that the health of such a system can be assessed by analyzing the proposed indices and comparing them with the parameters defined in the technical standards on alarm management. In addition, the metrics make it possible to track alarm management work and verify whether it is improving the quality of the alarm system. To validate the proposed metrics, data from actual process plants in the petrochemical industry were used.
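Two indices of the kind discussed here can be sketched as follows: average alarm rate and peak alarms in any 10-minute window, computed from alarm timestamps. The target thresholds in the sketch are placeholders, not the specific values of the thesis or of any standard (EEMUA 191 and ISA 18.2 define the actual benchmarks).

    # Simple alarm-system metrics computed from alarm timestamps (seconds).
    from bisect import bisect_right

    def alarm_metrics(timestamps_s, avg_target_per_h=6, peak_target_per_10min=10):
        ts = sorted(timestamps_s)
        duration_h = (ts[-1] - ts[0]) / 3600.0 or 1.0
        avg_per_h = len(ts) / duration_h
        # Peak: maximum number of alarms inside any 10-minute window.
        peak = max(bisect_right(ts, t + 600) - i for i, t in enumerate(ts))
        return {
            "avg_alarms_per_hour": avg_per_h,
            "peak_alarms_per_10min": peak,
            "avg_within_target": avg_per_h <= avg_target_per_h,
            "peak_within_target": peak <= peak_target_per_10min,
        }

    # Example with hypothetical alarm timestamps.
    print(alarm_metrics([0, 120, 130, 500, 3600, 3620, 7300]))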
Abstract:
The structure of industrial automation is based on a hierarchical pyramid in which restricted information islands are created. These information islands are characterized by systems whose hardware and software are proprietary, that is, supplied by a single manufacturer, tying the customer to that supplier. This situation causes great damage to companies, since connection and integration with equipment from other suppliers is very complicated and often impossible, because of the high cost of the solution or technical incompatibility. This work consists of specifying and implementing the Web visualization module of GERINF, a FINEP/CTPetro project whose objective is to develop software for information management in industrial processes. GERINF is divided into three modules: Web visualization, compression and storage, and communication. Results are presented from the use of the proposed system for information management at a natural gas collection unit in Guamaré, at PETROBRAS UN-RNCE.
Abstract:
The use of supervision systems has become increasingly essential for accessing, managing, and obtaining data from industrial processes, because of constant developments in industrial automation. These supervisory systems (SCADA) have been widely used in many industrial environments to store process data and to control the processes according to some adopted strategy. The SCADA control hardware is the set of equipment that carries out this work, and the SCADA supervision software accesses process data through the control hardware and presents it to the users. Currently, many industrial systems adopt supervision software developed by the same manufacturer as the control hardware, and this software usually cannot be used with equipment from other manufacturers. This work proposes an approach for developing supervisory systems able to access process information through different control hardware. An architecture for supervisory systems is first defined in order to guarantee efficient communication and data exchange. The architecture is then applied in a supervisory system that monitors oil wells using distinct control hardware. The implementation was modeled and verified with the formal method of Petri nets. Finally, experimental results are presented to demonstrate the applicability of the proposed solution.
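The hardware-independence idea can be sketched as an abstraction layer like the one below. Class and method names are hypothetical and not taken from the thesis; the point is only that the supervisory code depends on an abstract driver, so support for another vendor's controller is added by writing a new driver.

    # Abstract driver separating the supervisory layer from vendor-specific hardware.
    from abc import ABC, abstractmethod

    class ControlHardwareDriver(ABC):
        @abstractmethod
        def read_tag(self, tag: str) -> float:
            """Read one process variable from the controller."""

        @abstractmethod
        def write_tag(self, tag: str, value: float) -> None:
            """Write a setpoint or command to the controller."""

    class VendorAPlcDriver(ControlHardwareDriver):
        def read_tag(self, tag: str) -> float:
            # The vendor-specific protocol or library would be called here.
            return 0.0

        def write_tag(self, tag: str, value: float) -> None:
            pass

    def poll_well(driver: ControlHardwareDriver):
        # Supervisory code is identical regardless of which driver is plugged in.
        return {"head_pressure": driver.read_tag("PT-001"),
                "motor_current": driver.read_tag("IT-001")}

    print(poll_well(VendorAPlcDriver()))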
Abstract:
Operating industrial processes becomes more complex every day, and one of the factors contributing to this growth in complexity is the integration of new technologies and smart solutions employed in industry, such as decision support systems. In this regard, this dissertation aims to develop a decision support system based on a computational tool called an expert system. The main goal is to make operation more reliable and secure while maximizing the amount of information relevant to each situation, by using an expert system based on rules designed for a particular area of expertise. For the modeling of such rules, a high-level environment is proposed that allows rules to be created and manipulated more easily through visual programming. Despite the wide range of possible applications, this dissertation focuses only on the real-time filtering of alarms during operation, validated in a case study based on a real scenario that occurred in an industrial plant of an oil and gas refinery.
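A rule-based alarm filter of the kind described can be sketched as below. The rules, field names, and example data are hypothetical, and the dissertation expresses its rules through a visual environment rather than in code; this is only the underlying idea.

    # Each rule is a predicate over an alarm and the current process context;
    # an alarm is shown to the operator only if every rule allows it.
    def make_rules():
        return [
            # Suppress low-priority alarms from a unit that is shut down.
            lambda alarm, ctx: not (alarm["priority"] == "low"
                                    and ctx["unit_down"].get(alarm["unit"])),
            # Suppress consequential alarms whose root cause is already active.
            lambda alarm, ctx: alarm["tag"] not in ctx["suppressed_by_root_cause"],
        ]

    def filter_alarms(alarms, ctx, rules):
        return [a for a in alarms if all(rule(a, ctx) for rule in rules)]

    ctx = {"unit_down": {"U-200": True},
           "suppressed_by_root_cause": {"FI-204.LOW"}}
    alarms = [{"tag": "PI-101.HIGH", "unit": "U-100", "priority": "high"},
              {"tag": "TI-205.LOW", "unit": "U-200", "priority": "low"},
              {"tag": "FI-204.LOW", "unit": "U-200", "priority": "high"}]
    print(filter_alarms(alarms, ctx, make_rules()))   # only PI-101.HIGH is shown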
Abstract:
Slugging is a well-known phenomenon in multiphase flow which may cause problems such as pipeline vibration and high liquid level in the separator. It can be classified according to where it occurs. The most severe form, known as riser slugging, occurs in the vertical pipe that feeds the platform. Also called severe slugging, it can cause severe pressure fluctuations in the process flow, excessive vibration, flooding of separator tanks, limited production, and unscheduled production stops, among other negative aspects that motivated this work. A feasible way to deal with this problem is to design an effective method for removing or attenuating the phenomenon, that is, a controller. According to the literature, a conventional PID controller does not produce good results, due to the high degree of nonlinearity of the process, which has driven the development of advanced control techniques. Among these is the model predictive controller (MPC), in which the control action results from the solution of an optimization problem; it is robust and can incorporate physical and/or safety constraints. The objective of this work is to apply a non-conventional nonlinear model predictive control technique to severe slugging, in which the liquid mass in the riser is controlled by the production valve and, indirectly, the oscillations of flow and pressure are suppressed, seeking environmental and economic benefits. The proposed strategy is based on linear approximations of the model and the repeated solution of a quadratic optimization problem, providing solutions that improve at each iteration. When the algorithm converges, the predicted values of the process variables coincide with those obtained from the original nonlinear model, ensuring that the constraints are satisfied along the prediction horizon. A mathematical model recently published in the literature, capable of representing the characteristics of severe slugging in a real oil well, is used both for simulation and for the design of the proposed controller, whose performance is compared with that of a linear MPC.
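For reference, the quadratic program solved at each iteration of a successive-linearization MPC of this kind can be written generically as follows; the notation is generic rather than the dissertation's, with m the controlled riser liquid mass, u the production-valve opening, r the reference, and (A_k, B_k, d_k) the linearization of the nonlinear model around the current predicted trajectory.

    \begin{aligned}
    \min_{\Delta u_k,\ldots,\Delta u_{k+N_c-1}} \quad
      & \sum_{i=1}^{N_p} \lVert \hat{m}_{k+i} - r_{k+i} \rVert_Q^2
        \;+\; \sum_{i=0}^{N_c-1} \lVert \Delta u_{k+i} \rVert_R^2 \\
    \text{s.t.} \quad
      & \hat{m}_{k+i+1} = A_k\,\hat{m}_{k+i} + B_k\,u_{k+i} + d_k
        \quad \text{(model linearized around the current trajectory)} \\
      & u_{\min} \le u_{k+i} \le u_{\max}, \qquad
        \Delta u_{\min} \le \Delta u_{k+i} \le \Delta u_{\max}
    \end{aligned}

Re-linearizing around the newly predicted trajectory and solving this problem again yields the iterative improvement described above; at convergence the linearized predictions coincide with those of the nonlinear model, so the constraints hold for the true predicted trajectory as well.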