26 results for "Plataformas de programação" (programming platforms)
at the Universidade Federal do Rio Grande do Norte (UFRN)
Resumo:
In the operational context of industrial processes, an alarm is, by definition, a warning to the operator that an action with a limited time to run is required, while an event is a piece of state-change information that does not require action by the operator and therefore should not be annunciated, only stored for maintenance and incident analysis and used for signaling/monitoring (EEMUA, 2007). However, alarms and events are often confused and improperly configured in the same way by developers of automation systems. This practice results in a high number of pseudo-alarms during the operation of industrial processes. The high number of alarms is a major obstacle to improving operational efficiency, making it difficult to identify problems and increasing the time to respond to abnormalities. The main consequences of this scenario are increased risk to personal safety, to the facilities and to the environment, as well as loss of production. The aim of this work is to present a configuration philosophy for supervision and control systems, developed with the goal of reducing the number of pseudo-alarms and increasing the reliability of the information that the system provides. A real case study was conducted on the automation system of Petrobras' offshore hydrocarbon production in Rio Grande do Norte in order to validate the application of this new methodology. The work followed the premises of the tool presented in ISA SP18.2 (2009), called the "alarm life cycle". After the implementation of the methodology there was a significant reduction in the number of alarms
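The alarm/event distinction above reduces to a simple routing rule: annunciate only what demands operator action, and merely store the rest. The Python sketch below illustrates that rule with hypothetical names (the actual work is a SCADA configuration philosophy, not application code):

```python
from dataclasses import dataclass

@dataclass
class Notification:
    tag: str
    requires_action: bool  # per the EEMUA/ISA definition, only alarms demand operator action

def route(n: Notification, alarm_list: list, event_log: list) -> None:
    """Annunciate alarms to the operator; store events for later analysis only."""
    if n.requires_action:
        alarm_list.append(n.tag)   # shown on the operator's alarm summary
    else:
        event_log.append(n.tag)    # kept for maintenance/incident analysis, never annunciated
```

Configuring events as alarms collapses the two branches into one, which is exactly how pseudo-alarms flood the operator's display.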
Resumo:
In industrial informatics, several attempts have been made to develop notations and semantics for classifying and describing different kinds of system behavior, particularly in the modeling phase. Such attempts provide the infrastructure to solve real engineering problems and to build practical systems that aim, mainly, to increase the productivity, quality, and safety of the process. Despite the many studies that have attempted to develop friendly methods for industrial controller programming, such controllers are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation on these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language that follows international standards. Accordingly, this work proposes a methodology for plant and control modeling of discrete event systems that include sequential, parallel and timed operations, using a formalism based on Statecharts, denominated Basic Statechart (BSC). The methodology also provides automatic procedures to validate and implement these systems. To validate our methodology, we present two case studies with typical examples from the manufacturing sector. The first example shows a sequential control for a tagging machine, which is used to illustrate dependences between the devices of the plant. In the second example, we discuss more than one strategy for controlling a manufacturing cell. The model with no control has 72 states (distinct configurations); the model with sequential control generated 20 different states, but they act in only 8 distinct configurations. The model with parallel control generated 210 different states, but these act in only 26 distinct configurations; this control strategy is therefore less restrictive than the previous one.
Lastly, we present an example that highlights the modular characteristic of our methodology, which is very important for the maintenance of applications. In this example, the sensors for identifying pieces in the plant were removed, so changes in the control model are needed to transmit the information from the input buffer sensor to the other positions of the cell
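The flavor of such a discrete-event control model can be sketched as a flat transition table; the toy sequence below for a hypothetical tagging machine is only illustrative (the BSC formalism additionally supports hierarchy, parallelism and timed transitions, which a flat table cannot express):

```python
# Discrete-event control as a transition table: (state, event) -> next state.
# States and events are invented for illustration, not taken from the dissertation.
TRANSITIONS = {
    ("idle",      "piece_arrived"): "clamping",
    ("clamping",  "clamped"):       "tagging",
    ("tagging",   "done"):          "releasing",
    ("releasing", "released"):      "idle",
}

def step(state: str, event: str) -> str:
    """Fire the transition if the event is enabled in this state; else stay put."""
    return TRANSITIONS.get((state, event), state)
```

Counting the reachable (state, event) combinations of such models is what yields figures like the 72, 20 and 210 configurations reported above.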
Resumo:
This dissertation describes the use of new technologies from the areas of telecommunications, networks and industrial automation to increase operational safety and obtain operational improvements on offshore oil platforms. The presented solution joins several modules from these areas, making it possible to supervise and control offshore oil platforms from an onshore station, much like a remote control, thanks to the possibility of seeing and hearing the operational area through cameras and microphones, allowing the system operator to be virtually "present" on the platform. This reduces the need for embarked personnel, increasing operational safety. As a consequence, operational improvements are obtained through the use of a broadband multiservice digital link, which simultaneously carries digital data (Ethernet network), telephony (VoIP), image and sound
Resumo:
This work analyzes the behavior of the gas flow of plunger lift wells producing to well testing separators on offshore production platforms, in order to propose a technical procedure to estimate the gas flow during the slug production period. The motivation for this work arose from the expectation of some wells being equipped with the plunger lift method by PETROBRAS in the Ubarana sea field, located off the coast of Rio Grande do Norte State, where the produced fluids are measured in well testing separators on the platform. The artificial oil lift method called plunger lift is used when the available energy of the reservoir is not high enough to overcome all the load losses necessary to lift the oil from the bottom of the well to the surface continuously. This method consists, basically, of a free piston acting as a mechanical interface between the formation gas and the produced liquids, greatly increasing the well's lifting efficiency. A pneumatic control valve mounted on the flow line controls the cycles. When this valve opens, the plunger starts to move from the bottom to the surface of the well, lifting all the oil and gas above it until it reaches the well test separator, where the fluids are measured. The well test separator is used to measure all the volumes produced by the well during a certain period of time called a production test. In most cases, the separators are designed to measure stabilized flow, in other words, reasonably constant flow, by the use of electronic level and pressure controllers (PLC) and by the assumption of a steady pressure inside the separator. With plunger lift wells, the liquid and gas flows at the surface are cyclical and unstable, which causes the appearance of slugs inside the separator, mainly in the gas phase, and introduces significant errors in the measurement system (e.g., overrange error).
The gas flow analysis proposed in this work is based on two mathematical models used together: i) a plunger lift well model proposed by Baruzzi [1], with later modifications made by Bolonhini [2] to build a plunger lift simulator; ii) a two-phase separator model (gas + liquid) derived from a three-phase separator model (gas + oil + water) proposed by Nunes [3]. Based on the models above and on field data collected from the well test separator of the PUB-02 platform (Ubarana sea field), it was possible to demonstrate that the output gas flow of the separator can be estimated, with reasonable precision, from the control signal of the Pressure Control Valve (PCV). Several models from the System Identification Toolbox of MATLAB® were analyzed to evaluate which one best fits the data collected from the field. For validation of the models, the AIC criterion was used, as well as a variant of the cross-validation criterion. The ARX model fit the data best, so we also evaluated a recursive algorithm (RARX) with real-time data. The results were quite promising, indicating the viability of estimating the output gas flow rate of a plunger lift well producing to a well test separator from the built-in information of the control signal to the PCV
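The ARX identification step can be illustrated outside MATLAB®. The sketch below fits a first-order ARX model y[k] = a·y[k-1] + b·u[k-1] by ordinary least squares on synthetic, noiseless data; the signals stand in for the PCV control signal and the separator gas flow, not the actual field data used in the dissertation:

```python
import numpy as np

# Synthetic first-order plant, playing the role of the separator/valve dynamics.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)          # "control signal" (input)
y = np.zeros(200)                     # "gas flow" (output)
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# ARX(1,1) regression: stack the regressors y[k-1], u[k-1] and solve least squares.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta                  # estimated model parameters
```

A recursive variant (RARX) updates theta sample by sample instead of solving one batch problem, which is what makes real-time estimation from the live control signal feasible.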
Resumo:
In the last two decades of the past century, following the consolidation of the Internet as the worldwide computer network, applications generating more robust data flows started to appear. The increasing use of videoconferencing stimulated the creation of a new form of point-to-multipoint transmission called IP Multicast. All companies working on software and hardware development for network videoconferencing have adjusted their products and developed new solutions for the use of multicast. However, configuring such diverse solutions is not easy, especially when changes to the operating system are also required. Besides, the existing free tools have limited functions, and the current commercial solutions are heavily dependent on specific platforms. Along with the maturity of IP Multicast technology and its inclusion in all current operating systems, object-oriented programming languages developed classes able to handle multicast traffic. So, with the help of the Java APIs for networking, databases and hypertext, it became possible to develop an integrated environment able to handle multicast traffic, which is the major objective of this work. This document describes the implementation of the above-mentioned environment, which provides many functions for using and managing multicast traffic, functions which previously existed only in a limited way and in just a few tools, normally commercial ones. This environment is useful to different kinds of users: common users who want to join multimedia Internet sessions, as well as more advanced users such as engineers and network administrators who may need to monitor and handle multicast traffic
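Joining a multicast group is the basic operation any such environment builds on. The work uses the Java APIs; the sketch below shows the equivalent socket-level steps in Python, with an arbitrary example group address and port (any administratively scoped 239.x.x.x address would do):

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007  # hypothetical group/port, for illustration only

def group_membership(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack the ip_mreq structure the kernel expects for an IGMP join."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

def make_receiver(group: str = GROUP, port: int = PORT) -> socket.socket:
    """UDP socket bound to the port and subscribed to the multicast group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow several listeners
    s.bind(("", port))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                 group_membership(group))                    # IGMP join
    return s
```

Everything beyond this point — session discovery, monitoring, management — is what distinguishes an integrated environment from the bare socket API.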
Resumo:
A challenge that remains in the robotics field is how to make a robot react in real time to visual stimuli. Traditional computer vision algorithms used to tackle this problem are still very expensive, taking too long on common computer processors. Even very simple algorithms like image filtering or mathematical morphology operations may take too long. Researchers have implemented image processing algorithms in highly parallel hardware devices in order to cut down the time spent processing, with good results. By using hardware-implemented image processing techniques and a platform-oriented system based on the Nios II processor, we propose an approach that uses hardware processing and event-based programming to simplify vision-based systems while accelerating parts of the algorithms used
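To see why even a "simple" operation is costly in software, consider a naive CPU implementation of binary erosion, a basic mathematical morphology operator: every pixel requires a full window scan, yet each pixel is independent of the others, which is exactly the structure that parallel hardware exploits. A minimal Python sketch (illustrative, not the hardware design of this work):

```python
def erode(img, k=3):
    """Binary erosion with a k x k square structuring element (naive CPU version).
    A pixel survives only if every pixel in its k x k neighborhood is set."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]          # border pixels default to 0
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in range(-r, r + 1)
                                for dx in range(-r, r + 1)))
    return out
```

In hardware, the inner k x k check becomes a single combinational AND evaluated for many pixels per clock cycle, turning an O(k²) per-pixel loop into wire delay.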
Resumo:
The control of industrial processes has become increasingly complex due to the variety of factory devices, quality requirements and market competition. Such complexity requires a large amount of data to be handled by the three levels of process control: field devices, control systems and management software. Using data effectively at each of these levels is extremely important to industry. Many of today's industrial computer systems consist of distributed software systems written in a wide variety of programming languages and developed for specific platforms, so ever more companies make significant investments to maintain or even rewrite their systems for different platforms. Furthermore, it is rare for a software system to work in complete isolation: in industrial automation it is common for software to interact with other systems on different machines, even written in different languages. Thus, interoperability is not just a long-term challenge, but also a current requirement of industrial software production. This work proposes a middleware solution for communication over web services and presents a use case applying the developed solution to an integrated system for industrial data capture, allowing such data to be available across the network in a simplified, platform-independent way
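The key to the platform independence claimed above is a language-neutral wire format. The toy sketch below (hypothetical names; the dissertation's middleware uses web services, for which plain JSON stands in here) shows how field readings pushed by devices become consumable by a client written in any language:

```python
import json

class DataCaptureService:
    """Toy facade for industrial data capture: field devices push readings,
    and any client, in any language, pulls them as a JSON document."""
    def __init__(self):
        self._readings = []

    def push(self, tag: str, value: float) -> None:
        """Called by a field-level driver each time a tag is sampled."""
        self._readings.append({"tag": tag, "value": value})

    def pull(self) -> str:
        """Called by management-level clients; JSON keeps the format neutral."""
        return json.dumps(self._readings)
```

A real web-service middleware adds transport, discovery and typing on top, but the interoperability argument is the same: no level of the process-control hierarchy needs to share a platform with any other.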
Resumo:
The production of oil and gas is usually accompanied by the production of water, also known as produced water. Studies were conducted on platforms that discharge produced water into the Atlantic Ocean due to oil and gas production by Petrobras from 1996 to 2006 in the following basins: Santos (Brazilian south region), Campos (Brazilian southeast region) and Ceara (Brazilian northeast region). This study encompasses chemical composition, toxicological effects, discharge volumes, and produced water behavior after release into the ocean, including dispersion plume modeling and monitoring data of the marine environment. The concentration medians for a sampling of 50 samples were: ammonia (70 mg L-1), boron (1.3 mg L-1), iron (7.4 mg L-1), BTEX (4.6 mg L-1), PAH (0.53 mg L-1), TPH (28 mg L-1), phenols (1.3 mg L-1) and radioisotopes (0.15 Bq L-1 for 226Ra and 0.09 Bq L-1 for 228Ra). The concentrations of the organic and inorganic parameters observed for the Brazilian platforms were similar to the international reference data for produced water in the North Sea and in other regions of the world. Significant differences were found in the concentrations of the following parameters: BTEX (p<0.0001), phenols (p=0.0212), boron (p<0.0001), iron (p<0.0001) and toxicological response in the sea urchin Lytechinus variegatus (p<0.0001) when considering two distinct groups: platforms from the southeast region and from the northeast region (PCR-1). Significant differences were not observed among the other parameters. On platforms with large gas production, the monoaromatic concentrations (BTEX from 15.8 to 21.6 mg L-1) and phenols (from 2 to 83 mg L-1) were higher than on oil platforms (median BTEX concentration of 4.6 mg L-1 for n=53, and median phenol concentration of 1.3 mg L-1 for n=46).
A study was also conducted on the influence of the dispersion plumes of produced water in the vicinity of six oil and gas production platforms (P-26, PPG-1, PCR-1, P-32, SS-06) and in a hypothetical critical scenario using the chemical characteristics of each effluent. In this study, using the CORMIX and CHEMMAP models to simulate the dispersion plumes of the produced water discharges, it was possible to obtain the dilution achieved in the ocean after those discharges. The modeled dispersion plumes of the produced water in the near field showed dilutions of 700 to 900 times in the first 30-40 meters from the discharge point of the platform PCR-1; 100 times for the platform P-32, at 30 meters of distance; 150 times for the platform P-26, at 40 meters; 100 times for the platform PPG-1, at 130 meters; 280 to 350 times for the platform SS-06, at 130 meters; and 100 times for the hypothetical critical scenario, at 130 meters. The dilutions continue in the far field, and the simulation results made it possible to verify that all the parameters presented concentrations below the maximum values established by Brazilian legislation for seawater (CONAMA 357/05 - Class 1) within 500 meters of the discharge point. These results were in agreement with the field measurements. Although the Brazilian produced water generally presented toxicological effects on marine organisms, it was verified that dilutions of 100 times were sufficient to avoid toxicological responses. Field monitoring data of the seawater around the Pargo, Pampo and PCR-1 platforms did not show toxicity in the seawater close to these platforms.
The results of environmental monitoring of seawater and sediments showed that no alterations in environmental quality were detected in areas under the direct influence of the oil production activities in the Campos and Ceara basins, in agreement with the results obtained from the dispersion plume modeling of the produced water discharge
Resumo:
Context-aware applications are typically dynamic and use services provided by several sources, with different quality levels. Context information quality is expressed in terms of Quality of Context (QoC) metadata, such as precision, correctness, refreshment, and resolution. On the other hand, service quality is expressed via Quality of Service (QoS) metadata such as response time, availability and error rate. In order to ensure that an application is using services and context information that meet its requirements, it is essential to continuously monitor this metadata. For this purpose, a QoS and QoC monitoring mechanism is needed that meets the following requirements: (i) support measurement and monitoring of QoS and QoC metadata; (ii) support synchronous and asynchronous operation, thus enabling the application both to periodically gather the monitored metadata and to be asynchronously notified whenever a given metadatum becomes available; (iii) use ontologies to represent information in order to avoid ambiguous interpretation. This work presents QoMonitor, a module for QoS and QoC metadata monitoring that meets the above-mentioned requirements. The architecture and implementation of QoMonitor are discussed. To support asynchronous communication, QoMonitor uses two protocols: JMS and Light-PubSubHubbub. In order to illustrate QoMonitor in the development of ubiquitous applications, it was integrated with OpenCOPI (Open COntext Platform Integration), a middleware platform that integrates several context provision middleware systems. To validate QoMonitor we used two applications as proof of concept: an oil and gas monitoring application and a healthcare application. This work also presents a validation of QoMonitor in terms of performance, for both synchronous and asynchronous requests
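The synchronous/asynchronous duality of requirement (ii) can be sketched with a toy observer pattern: polling covers the synchronous path, callbacks the asynchronous one. Names are illustrative only; QoMonitor itself delegates the asynchronous path to JMS and Light-PubSubHubbub:

```python
class QoMonitorSketch:
    """Toy QoS/QoC metadata monitor: metadata can be polled synchronously
    or pushed to subscribers as soon as a new value is measured."""
    def __init__(self):
        self._metadata = {}   # latest value per metadata name
        self._subs = {}       # name -> list of subscriber callbacks

    def subscribe(self, name, callback):
        """Asynchronous path: register interest in one metadata item."""
        self._subs.setdefault(name, []).append(callback)

    def update(self, name, value):
        """Called by the measuring side; notifies subscribers immediately."""
        self._metadata[name] = value
        for cb in self._subs.get(name, []):
            cb(name, value)

    def get(self, name):
        """Synchronous path: periodic polling by the application."""
        return self._metadata.get(name)
```

A real monitor adds persistence, ontology-based representation (requirement iii) and a network protocol, but the two access styles coexist exactly as here.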
Resumo:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Resumo:
Middleware platforms have been widely used as an underlying infrastructure for the development of distributed applications. They provide distribution and heterogeneity transparency and a set of services that ease the construction of distributed applications. Nowadays, middleware accommodates an increasing variety of requirements to satisfy distinct application domains. This broad range of application requirements increases the complexity of the middleware, due to the introduction of many cross-cutting concerns in the architecture which are not properly modularized by traditional programming techniques, resulting in tangling and scattering of these concerns in the middleware code. The presence of these cross-cutting concerns limits middleware scalability, and the aspect-oriented paradigm has been used successfully to improve the modularity, extensibility and customization capabilities of middleware. This work presents AO-OiL, an aspect-oriented (AO) middleware architecture based on the AO middleware reference architecture. This middleware follows the philosophy that middleware functionality must be driven by the application requirements. AO-OiL consists of an AO refactoring of the OiL (Orb in Lua) middleware in order to separate basic and crosscutting concerns. The proposed architecture was implemented in Lua and RE-AspectLua. To evaluate the impact of the refactoring on the middleware architecture, this work presents a comparative performance analysis of AO-OiL and OiL
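The core idea of the aspect-oriented refactoring — weaving a crosscutting concern (e.g. tracing) around base code without editing it — can be sketched with a Python decorator. This is only an analogy for the weaving mechanism, not the RE-AspectLua API used by AO-OiL:

```python
import functools

def traced(calls):
    """Tracing as a crosscutting concern: the aspect is defined once and
    woven around any function, instead of being scattered through the code."""
    def aspect(fn):
        @functools.wraps(fn)
        def advice(*args, **kwargs):
            calls.append(fn.__name__)   # before-advice: record the join point
            return fn(*args, **kwargs)  # proceed with the base functionality
        return advice
    return aspect
```

The base function stays free of tracing code, which is the untangling that the AO refactoring performs on OiL's invocation path.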
Resumo:
Research on Wireless Sensor Networks (WSN) has evolved, with potential applications in several domains. However, the building of WSN applications is hampered by the need to program with the low-level abstractions provided by sensor operating systems and by the need for specific knowledge about each application domain and each sensor platform. We propose an MDA approach to develop WSN applications. This approach allows domain experts to contribute directly to the development of applications without needing low-level knowledge of WSN platforms and, at the same time, it allows network experts to program WSN nodes to meet application requirements without specific knowledge of the application domain. Our approach also promotes the reuse of the developed software artifacts, allowing an application model to be reused across different sensor platforms and a platform model to be reused for different applications
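The reuse argument rests on separating the platform-independent application model from the platform-specific mapping. The sketch below reduces that to template-based code generation with a toy model and a toy platform template (both invented for illustration; they are not the notation of this work):

```python
# Platform-specific template, written once per sensor platform by a network expert.
NODE_TEMPLATE = "read_sensor({sensor}); send_every({period_ms});"

def generate(app_model: dict, platform_template: str) -> str:
    """Transform a platform-independent application model into node code.
    Swapping the template retargets the same model to another platform."""
    return platform_template.format(**app_model)
```

The same `app_model` paired with a different template yields code for a different platform, and the same template serves any application model, which is the two-way reuse described above.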
Resumo:
This work approaches the Scheduling Workover Rigs Problem (SWRP), the problem of maintaining the wells of an oil field, which, although difficult to solve, is extremely important from economic, technical and environmental standpoints. A mathematical formulation of this problem is presented and an algorithmic approach was developed. The problem consists of finding the best schedule of well servicing by the workover rigs, taking into account the minimization of a composite of the costs of the workover rigs and the total loss of oil suffered by the wells. This problem is similar to the Vehicle Routing Problem (VRP), which is classified as NP-hard. The goal of this research is to develop an algorithmic approach to solve the SWRP using the fundamentals of metaheuristics such as Memetic Algorithms and GRASP. Instances close to reality are generated for the tests to analyze the computational performance of the approaches mentioned above. Thereafter, a comparison of the performance and quality of the results obtained by each technique is performed
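A GRASP construction phase for a single-rig simplification of the problem can be sketched as follows. The well data, the cost model (oil lost while each well waits for repair) and the restricted candidate list rule are illustrative assumptions, not the dissertation's formulation:

```python
import random

def grasp_order(wells, alpha=0.3, iters=20, seed=0):
    """GRASP sketch for one rig. wells: {name: (service_time, oil_loss_rate)}.
    Cost = total oil lost while each well waits to be repaired."""
    rng = random.Random(seed)

    def cost(order):
        t, total = 0, 0
        for w in order:
            duration, loss_rate = wells[w]
            t += duration
            total += loss_rate * t      # well w leaks until its repair finishes
        return total

    best = None
    for _ in range(iters):
        remaining, order = dict(wells), []
        while remaining:
            # Greedy criterion: serve short, high-loss wells first.
            ranked = sorted(remaining, key=lambda w: remaining[w][0] / remaining[w][1])
            rcl = ranked[:max(1, int(alpha * len(ranked)))]  # restricted candidate list
            pick = rng.choice(rcl)      # randomized choice within the RCL
            order.append(pick)
            remaining.pop(pick)
        if best is None or cost(order) < cost(best):
            best = order
    return best, cost(best)
```

A memetic algorithm would instead evolve a population of such orders and apply local search to each offspring; both attack the same NP-hard schedule space.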
Resumo:
The increasing demand for processing power in recent years has pushed the integrated circuit industry to look for ways of providing even more processing power with less heat dissipation, power consumption and chip area. This goal was long achieved by increasing the circuit clock, but since there are physical limits to this approach, a new solution emerged: the multiprocessor system on chip (MPSoC). This approach demands new tools and basic software infrastructure to take advantage of the inherent parallelism of these architectures. One of the first activities in the oil exploration industry is the decision on exploring oil fields; those decisions are aided by reservoir simulations that demand high processing power, and the MPSoC may offer greater performance if its parallelism can be well used. This work presents a proposal for a micro-kernel operating system and auxiliary libraries aimed at the STORM MPSoC platform, analyzing their influence on the reservoir simulation problem
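The central service such a micro-kernel provides on each processor is multiplexing tasks over the CPU. A cooperative round-robin scheduler, the simplest form of that service, can be sketched as below (illustrative only; the STORM platform's actual kernel and API are not reproduced here):

```python
from collections import deque

class MicroKernelSketch:
    """Toy cooperative round-robin scheduler. Tasks are generators that
    yield once per time slice; the kernel interleaves them fairly."""
    def __init__(self):
        self.ready = deque()        # ready queue of runnable tasks

    def spawn(self, task):
        self.ready.append(task)

    def run(self):
        trace = []                  # record of which task ran each slice
        while self.ready:
            task = self.ready.popleft()
            try:
                trace.append(next(task))   # execute one time slice
                self.ready.append(task)    # re-queue at the tail: round robin
            except StopIteration:
                pass                       # task finished, drop it
        return trace
```

On an MPSoC, one such scheduler per processor plus inter-processor communication primitives is the minimal infrastructure a parallel reservoir simulation needs.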