1000 results for Pavements : Deformation : Instrumentation
Abstract:
The pumping of fluids through pipelines is the most economic and safe way of transporting them. That explains why in 1999 there were about 30,000 km [7] of pipelines of several diameters in Europe, transporting millions of cubic meters of crude oil and refined products, belonging to CONCAWE (the European oil companies' association for environment, health and safety, which brings together several petroleum companies). In Brazil there are about 18,000 km of pipelines transporting millions of cubic meters of liquids and gases. In 1999, nine accidents were reported to CONCAWE. One of those accidents caused a fatal victim. The oil loss was 171 m³, equivalent to 0.2 parts per million of the total transported volume. Even considering these figures, the costs involved in an accident can be high. An accident of great proportions can bring loss of human lives, severe environmental damage, loss of the drained product, loss of profit, damage to the company's image and high recovery costs. In line with this, and in some cases because of legal requirements, companies are increasingly investing in pipeline leak detection systems based on computer algorithms that operate in real time, seeking to further minimize the drained volumes. This decreases both the environmental impact and the costs. In general, all software-based systems present some type of false alarm, and there is a trade-off between the sensitivity of the system and the number of false alarms. This work aims to review the existing methods and to concentrate on the analysis of a specific system based on hydraulic noise, Pressure Point Analysis (PPA).
We will show the most important aspects that must be considered in the implementation of a Leak Detection System (LDS), from the initial risk analysis phase through the design bases, design, and choice of the field instrumentation required by the several LDS, to implementation and testing. We will present an analysis of events (noises) originating from the flow system that can generate false alarms, and a computer algorithm that suppresses those noises automatically.
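The sensitivity-versus-false-alarm trade-off mentioned above can be illustrated with a minimal sketch of a pressure-deviation test: a leak is flagged when a short-window mean pressure drops several standard deviations below a longer-term baseline. The window sizes and the threshold k are assumptions for illustration, not parameters of the commercial PPA system.

```python
from statistics import mean, stdev

def ppa_alarm(pressure, baseline_n=50, window_n=5, k=4.0):
    """Flag a leak when the mean of the last window_n pressure samples
    drops more than k baseline standard deviations below the mean of the
    preceding baseline_n samples. Raising k lowers the false-alarm rate
    but also the sensitivity; these parameters are illustrative."""
    if len(pressure) < baseline_n + window_n:
        return False  # not enough history yet
    base = pressure[-(baseline_n + window_n):-window_n]
    recent = pressure[-window_n:]
    mu, sigma = mean(base), stdev(base)
    sigma = max(sigma, 1e-9)  # guard against a perfectly flat baseline
    return mean(recent) < mu - k * sigma
```

Tuning k directly trades detection sensitivity against the false-alarm rate, the same compromise the abstract describes.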
Abstract:
Industries are becoming more and more rigorous where safety is concerned, whether to avoid financial damage due to accidents and low productivity, or to protect the environment. It was with great world accidents in mind, involving aircraft and industrial processes (nuclear, petrochemical and so on), that we decided to invest in systems for fault detection and diagnosis (FDD). FDD systems can prevent eventual faults, helping personnel with the maintenance and replacement of defective equipment. Nowadays, the issues involving fault detection, isolation, diagnosis and fault-tolerant control are gathering strength in academic and industrial environments. Based on this fact, in this work we discuss the importance of techniques that can assist in the development of systems for Fault Detection and Diagnosis (FDD) and propose a hybrid method for FDD in dynamic systems. We present a brief history to contextualize the techniques used in working environments. Fault detection in the proposed system is based on state observers in conjunction with other statistical techniques. The main idea is to use the observer itself, in addition to serving as an analytical redundancy, to allow the creation of a residual. This residual is used in FDD. A signature database assists in the identification of system faults: based on signatures derived from trend analysis of the residual signal and its difference, it classifies the faults using a decision tree. This FDD system is tested and validated in two plants: a simulated coupled-tanks plant and a didactic plant with industrial instrumentation. All the results collected in those tests are discussed.
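The observer-based residual generation described above can be sketched in its simplest scalar form (assumed gains; the thesis's trend analysis and decision-tree classifier are not reproduced here):

```python
def observer_residuals(a, b, L, u, y, x0=0.0):
    """Scalar discrete-time Luenberger observer
        xhat[k+1] = a*xhat[k] + b*u[k] + L*(y[k] - xhat[k])
    for a plant x[k+1] = a*x[k] + b*u[k] with full-state output y = x.
    Returns the residual sequence r[k] = y[k] - xhat[k], which stays near
    zero while model and plant agree and jumps when a fault appears."""
    xhat = x0
    residuals = []
    for uk, yk in zip(u, y):
        r = yk - xhat          # analytical-redundancy residual
        residuals.append(r)
        xhat = a * xhat + b * uk + L * r
    return residuals
```

In a full FDD scheme the residual (and its trend) would feed the signature database and decision tree; here only the residual generator is shown.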
Abstract:
The purpose of this study was to develop a pilot plant whose main goal is to emulate a pressure peak in the flow into a separation vessel, an effect similar to that caused by slug-flow production in wells equipped with the plunger lift artificial lift method. The motivation for its development was the need to test, in a smaller-scale plant, a new technique developed to estimate the gas flow rate in production wells equipped with plunger lift. To develop it, studies were carried out on multiphase flow effects, operating methods of plunger lift wells, industrial instrumentation elements, control valves, separation vessel sizing and measurement systems. The methodology used was the definition of the process flowcharts, their parameters and how the effects needed for the success of the experiments would be generated. Accordingly, the control valves, the design and construction of the vessels and the acquisition of the other equipment were defined. One of the vessels works as a compressed-air tank that is connected to the separation vessel and generates gas pulses controlled by an on/off valve. With the emulator system ready, several control experiments were performed, the main ones being the control of the generated pressure peaks and the flow measurement, thereby confirming the usefulness of the plant for the problem that motivated it. It was concluded that the system is capable of generating flow effects with pressure peaks in a primary separation vessel. Studies such as the estimation of the gas flow rate at the outlet of the vessel, and several academic studies, can be carried out and tested on a smaller scale and then applied to real plants, avoiding waste of time and money.
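A toy first-order simulation of the emulator's principle, an on/off valve periodically admitting compressed air into a separator, can be sketched as follows; all parameter values are illustrative assumptions, not plant data.

```python
def simulate_pulses(steps, period_s, duty, dt=0.1, tau_s=2.0, p_supply=5.0):
    """Toy emulator model: an on/off valve opens for duty*period_s of each
    cycle, driving separator pressure toward p_supply through a first-order
    lag with time constant tau_s (explicit Euler integration)."""
    p, trace = 0.0, []
    for k in range(steps):
        t = k * dt
        valve_open = (t % period_s) < duty * period_s
        target = p_supply if valve_open else 0.0
        p += dt / tau_s * (target - p)
        trace.append(p)
    return trace
```

Each valve cycle produces a pressure peak followed by a decay, a crude stand-in for the slug-like peaks the pilot plant emulates.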
Abstract:
Nowadays, when market competition requires products with better quality and there is a constant search for cost savings and better use of raw materials, the search for more efficient control strategies becomes vital. In Natural Gas Processing Units (NGPUs), as in most chemical processes, quality control is accomplished through product composition. However, chemical composition analysis has a long measurement time, even when performed by instruments such as gas chromatographs. This fact hinders the development of control strategies that could provide better process yield. Natural gas processing is one of the most important activities in the petroleum industry. The main economic product of an NGPU is liquefied petroleum gas (LPG). LPG is ideally composed of propane and butane; in practice, however, its composition includes contaminants such as ethane and pentane. In this work, an inferential system using neural networks is proposed to estimate the ethane and pentane mole fractions in LPG and the propane mole fraction in the residual gas. The goal is to provide the values of these estimated variables every minute using a single multilayer neural network, making it possible to apply inferential control techniques in order to monitor LPG quality and reduce the propane loss in the process. To develop this work, an NGPU composed of two distillation columns, a deethanizer and a debutanizer, was simulated in HYSYS software. The inference is performed using the process variables of the PID controllers present in the instrumentation of these columns. To reduce the complexity of the inferential neural network, the statistical technique of principal component analysis is used to decrease the number of network inputs, thus forming a hybrid inferential system. A simple strategy is also proposed to correct the inferential system in real time, based on measurements from chromatographs that may exist in the process under study.
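The PCA stage of the hybrid inferential system can be sketched in pure Python via power iteration on the covariance matrix of the measured process variables; this shows only the dimensionality-reduction step, with the neural network itself omitted.

```python
def pca_first_component(X, iters=200):
    """First principal component of the row-sample matrix X by power
    iteration on the sample covariance matrix (pure stdlib)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    C = [[sum(Xc[i][p] * Xc[i][q] for i in range(n)) / (n - 1)
          for q in range(d)] for p in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[p][q] * v[q] for q in range(d)) for p in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

def pca_scores(X, v, means):
    """Project samples onto the component: the compressed network input."""
    return [sum((row[j] - means[j]) * v[j] for j in range(len(v)))
            for row in X]
```

In the hybrid system, scores like these (for the leading components) replace the raw PID process variables as inputs to the multilayer network.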
Abstract:
The present work was carried out with the objective of studying the collection and disposal of aquatic plants at different sites and infestation levels of the Tietê/Paraná system, in the Jupiá reservoir. The operation was performed with the aid of instrumentation installed on an aquatic plant harvester, with a GPS system equipped with a differential correction signal. The times spent loading and unloading the harvester were determined by timing, and the distance from the final collection point to the disposal point and the travel time were determined by timing and the use of conventional GPS. In some collections, polygons were demarcated and the operator was instructed to work exclusively within the corresponding area. Interpretation of the results made it possible to determine the share of collection time in the total operation time, indicating a significant value from the operational point of view (>70%). Considering disposal in areas infested with cattail, the mean total displacement was only 383 m, with a mean time of 200.96 s. The operational capacity of the harvester ranged from 0.23 to 1.60 ha h-1, with a mean of 4.48 ha day-1. The greatest limitation on operational capacity was associated with the mean travel speed, aggravated in areas with heavy infestations or deep water. Regarding the displacement of the harvester, there was great difficulty of orientation under normal operating conditions, making it impossible to maintain uniform spacing between collection strips and causing passes to overlap. It is concluded that the operational evaluation indicated the impossibility of operating the harvester without the aid of a navigation system to guide its movement in the control areas.
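The operational metrics reported above reduce to simple ratios; a minimal sketch with illustrative inputs (not the study's measured values):

```python
def operational_capacity(area_ha, time_h):
    """Effective operational capacity in ha/h: worked area over total time."""
    return area_ha / time_h

def collection_time_fraction(collect_s, total_s):
    """Share of collection time in the total operation time."""
    return collect_s / total_s
```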
Abstract:
Activities that use Global Navigation Satellite Systems (GNSS) are countless, and the most used system is the Global Positioning System (GPS) developed by the United States. In precision agriculture there are demands for static and kinematic positioning with distinct levels of accuracy for different applications; nevertheless, kinematic performance data are not available, as manufacturers of GPS receivers present only static performance information. For this reason, an instrumented vehicle was developed to test a methodology for evaluating the performance of GPS receivers under kinematic conditions representative of agricultural operations. A set of instrumentation was assembled and used for collecting data under variable speed and turning direction. Tests were conducted showing that the methodology allows measuring accuracy and precision, but improvements have to be implemented in the instrumentation equipment for long-term tests.
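One plausible way to turn kinematic GPS fixes into accuracy and precision figures, not necessarily the exact procedure of this methodology, is to compute cross-track errors against a straight reference line and take their mean (bias, i.e. accuracy) and dispersion (precision):

```python
from math import hypot
from statistics import mean, pstdev

def cross_track_errors(points, p1, p2):
    """Signed perpendicular distance from each GPS fix to the straight
    reference line through p1 and p2 (planar coordinates assumed)."""
    (x1, y1), (x2, y2) = p1, p2
    length = hypot(x2 - x1, y2 - y1)
    return [((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / length
            for (x, y) in points]

def accuracy_precision(errors):
    """Accuracy as the mean offset (bias), precision as the dispersion."""
    return mean(errors), pstdev(errors)
```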
Abstract:
Wireless sensor and actuator networks specified by IEEE 802.15.4 are increasingly being applied to instrumentation, as in the instrumentation of oil wells with plunger-lift-type completion. Due to the specific characteristics of the environments where they are installed, there is a risk of compromising network security, and several attack scenarios and the potential damage from them are presented. This establishes the need for a more detailed security study of these networks, which calls for the use of encryption algorithms such as AES-128 and RC6. The RC6 and AES-128 algorithms were therefore implemented on an 8-bit microcontroller, and their performance characteristics, critical for embedded applications, were studied. From these results a Hybrid Cryptographic Algorithm, ACH, was developed, which showed characteristics intermediate between AES and RC6 and more appropriate for applications with power consumption and memory limitations. A comparative study of the security quality of the three algorithms is also presented, demonstrating the cryptographic capability of ACH.
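A timing harness of the kind used to compare the algorithms on the microcontroller can be sketched in Python; here a toy XOR transform stands in for AES-128, RC6 and ACH, since the point is the measurement scaffold, not the ciphers themselves.

```python
import time

def xor_transform(key, block):
    """Toy stand-in 'cipher' (NOT secure): byte-wise XOR with the key."""
    reps = len(block) // len(key) + 1
    return bytes(b ^ k for b, k in zip(block, key * reps))

def benchmark(cipher, key, block, rounds=1000):
    """Repeatedly apply the cipher and return (final block, elapsed s),
    the same kind of throughput measurement made on the 8-bit MCU."""
    start = time.perf_counter()
    for _ in range(rounds):
        block = cipher(key, block)
    return block, time.perf_counter() - start
```

On the real target, cycle counts and RAM/flash footprints per block would be recorded instead of wall-clock time.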
Abstract:
The present work aims to present a method for the design and implementation of PID controllers based on industrial instrumentation. An automatic system for the auto-tuning of PID controllers is presented for first- and second-order systems. The software presented in this work is applied to plants controlled by PID controllers implemented in a PLC, performing the auto-tuning of the PID controller parameters of plants that need this tuning. The software has two stages: the first is the identification of the system using the recursive least squares algorithm, and the second is the design of the PID controller parameters using the root locus algorithm. An important aspect of this work is the use of industrial instrumentation in the experiments. The experiments were carried out on real plants controlled by PID controllers implemented in the PLC. Thus, the results were obtained not only from theoretical experiments performed with computational programs, but from real systems. The experiments showed good results obtained with the developed software.
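The identification stage can be sketched as recursive least squares on the first-order model y[k] = a·y[k-1] + b·u[k-1]; the root-locus PID design stage is not reproduced, and the initial covariance and forgetting factor are assumed values.

```python
def rls_first_order(u, y, lam=1.0):
    """Recursive least squares estimate of (a, b) in
        y[k] = a*y[k-1] + b*u[k-1],
    with forgetting factor lam and a large initial covariance P."""
    theta = [0.0, 0.0]
    P = [[1e6, 0.0], [0.0, 1e6]]
    for k in range(1, len(y)):
        phi = [y[k - 1], u[k - 1]]                       # regressor
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]           # gain
        err = y[k] - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta
```

The estimated (a, b) would then feed the root-locus computation of the PID gains.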
Abstract:
The last years have seen an increase in the acceptance and adoption of parallel processing, as much for high-performance scientific computing as for general-purpose applications. This acceptance has been favored mainly by the development of environments with massive parallel processing (MPP - Massively Parallel Processing) and of distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of programming languages used to build parallel applications, such as C, C++ and Fortran. In the development of parallel applications, a fundamental aspect is their performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to an increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be quite complicated, considering the parameters and degrees of freedom involved in the implementation of the parallel application. An alternative that has been adopted is the use of tools for the collection and visualization of performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. For an efficient visualization it is necessary to identify and collect data related to the execution of the application, a stage called instrumentation.
In this work we initially present a study of the main techniques used in the collection of performance data, followed by a detailed analysis of the main available tools that can be used in parallel architectures of the Beowulf cluster type running Linux on the x86 platform, using communication libraries based on MPI - Message Passing Interface - such as LAM and MPICH. This analysis is validated on parallel applications that deal with the training of perceptron-type neural networks using back-propagation. The conclusions show the potential and ease of use of the analyzed tools.
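Two of the ideas above, instrumenting code to collect timing events and the classic speedup/efficiency metrics, can be sketched as follows (function-level timing is a simplification; the real tools hook the MPI communication calls):

```python
import time
from functools import wraps

events = []  # (name, start, stop) records for later visualization

def instrument(fn):
    """Record wall-clock start/stop timestamps of every call."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            events.append((fn.__name__, t0, time.perf_counter()))
    return wrapper

def speedup_efficiency(t_serial, t_parallel, p):
    """Speedup S = T1/Tp and efficiency E = S/p for p processors."""
    s = t_serial / t_parallel
    return s, s / p
```

A visualizer would consume the `events` trace to locate bottlenecks, exactly the role of the tools analyzed in the work.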
Abstract:
Hardmetals are composites developed in 1923 by Karl Schröter, with wide application because of their high hardness, wear resistance and toughness. They are composed of a brittle WC phase and a ductile Co phase. The mechanical properties of hardmetals are strongly dependent on the microstructure of the WC-Co and are additionally affected by the microstructure of the WC powders before sintering. An important feature is that toughness and hardness increase simultaneously with the refinement of the WC. Therefore, the development of nanostructured WC-Co hardmetal has been extensively studied. There are many methods to manufacture WC-Co hardmetals, including the spray conversion process, co-precipitation, the displacement reaction process, mechanochemical synthesis and high-energy ball milling. High-energy ball milling is a simple and efficient way of manufacturing fine powder with nanostructure. In this process, the continuous impacts on the powders promote pronounced changes: the brittle phase is refined down to the nanometric scale and brought into the ductile matrix, while the ductile phase is deformed, re-welded and hardened. The goal of this work was to investigate the effects of high-energy milling time on the microstructural changes in the WC-Co particulate composite, particularly the refinement of the crystallite size and the lattice strain. The starting powders were WC (average particle size D50 0.87 μm) supplied by Wolfram Bergbau- u. Hütten GmbH and Co (average particle size D50 0.93 μm) supplied by H.C. Starck. A mixture of 90% WC and 10% Co was milled in a planetary ball mill for 2, 10, 20, 50, 70, 100 and 150 hours, with a ball-to-powder ratio of 15:1 at 400 rpm. The starting powders and the milled particulate composite samples were characterized by X-ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) to identify phases and morphology. The crystallite size and lattice strain were measured by Rietveld's method. This procedure allowed obtaining more precise information about the influence of each factor on the microstructure.
The results show that high-energy milling is an efficient manufacturing process for the WC-Co composite, and that the milling time has a great influence on the microstructure of the final particles, crushing the WC to nanometric order and dispersing it finely in the Co particles.
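A back-of-envelope relative of this measurement is the Scherrer equation, which estimates crystallite size from XRD peak broadening; it is not the Rietveld refinement used in the work (which also separates the lattice strain contribution), and the Cu Kα wavelength is an assumed default.

```python
from math import cos, radians

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)) in nm, with beta
    the peak FWHM in radians and theta half the diffraction angle.
    Cu K-alpha wavelength and shape factor K are assumed defaults."""
    beta = radians(fwhm_deg)
    theta = radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * cos(theta))
```

The broader the diffraction peak, the smaller the estimated crystallite, which is why prolonged milling shows up as peak broadening.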
Abstract:
The hardness test is widely used in materials research and evaluation for quality control. However, its results are subject to uncertainties introduced by the operator when measuring the diagonals of the impression made by the indenter in the sample. With this in mind, an automated hardness measurement apparatus was developed. The hardness value was obtained from the measurement of the plastic deformation suffered by the material under a known load. The deformation of the material was calculated by measuring the difference between the advance and retreat of a diamond indenter on the sample. The manual measurement of the diagonals was therefore unnecessary, reducing the source of error caused by the operator. Stress versus strain graphs could be analyzed from the acquired data, and a complete observation of the whole process became possible. Next, the hardness results calculated by the experimental apparatus were compared with the results obtained by a commercial microhardness machine in order to test its efficiency. All things considered, it became possible to measure material hardness through an automated method, which minimized the errors caused by the operator and increased the reliability of the analysis.
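For reference, the conventional diagonal-based computation that the automated apparatus avoids is the Vickers formula HV = 1.8544·F/d² (F in kgf, d in mm):

```python
def vickers_hardness(load_kgf, diag_mm):
    """Vickers hardness HV = 1.8544 * F / d**2, with the load F in kgf
    and d the mean indentation diagonal in mm."""
    return 1.8544 * load_kgf / diag_mm ** 2
```

Because HV depends on the square of the measured diagonal, small operator errors in d are amplified in the result, which is the uncertainty the depth-based apparatus was built to remove.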
Abstract:
The present work consists of the analysis of the tribological properties of basic and multifunctional knitted fabrics. This knowledge is of fundamental importance for the textile industry, since it can quantify, in an objective way, the tactile properties of fabrics. The fabrics used were characterized by friction and mechanical tests to determine the viscoelastic region, wear resistance and friction coefficient. The stress-strain curve was obtained by the Kawabata method, KES-FB1. Wear tests were performed with the aid of Martindale equipment. For the measurement of the friction coefficient, two methods were used and compared. The first is a method already established worldwide, known as KES-FB4, and the second is an innovative method called FRICTORQ, developed by the University of Minho. These two methods were compared taking into account that the relative motion between the tribological pairs differs between them: while the first motion is translational, the second is rotational. It was found that the multifunctional knitted fabrics had better tribological performance than the basic knitted fabrics: the viscoelastic region was larger, highlighting the multifunctional structure, with greater wear resistance, mainly on the back side of the knitted fabrics, and a lower friction coefficient. A comparative analysis between the two methods used to measure the friction coefficient showed that both methods were consistent in terms of results. In operational terms, FRICTORQ showed ease of operation and greater reproducibility of results.
Abstract:
Chitin and chitosan are nontoxic, biodegradable and biocompatible polymers produced from renewable natural sources, with applications in diverse areas such as agriculture, textiles, pharmaceuticals, cosmetics and biomaterials, in the form of gels, films and other polymeric membranes. Both have attracted growing interest from scientists and researchers as functional polymeric materials. In this context, the objective of this study was to take advantage of the waste of shrimp (Litopenaeus vannamei and Aristeus antennatus) and crabs (Ucides cordatus) from markets, beach huts and restaurants in Natal/RN for the extraction of chitin and chitosan and the production of membranes by the electrospinning process. The extraction was carried out through demineralization, deproteinization, deodorization and deacetylation. Morphological analyses (SEM and XRD), thermal analyses (TG and DTG), Fourier-transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC) and mechanical tensile tests were performed. XRD confirmed the semicrystalline structure of chitosan, while chitin showed higher crystallinity. The thermal analyses showed a dehydration process followed by decomposition, with behavior similar to that of carbonized material. Chitosan showed a lower maximum degradation temperature than chitin. In the DSC analysis, the curves were consistent with the thermal events of the chitosan membranes. The degrees of deacetylation (DD) obtained for chitosan extracted from Litopenaeus vannamei and Aristeus antennatus shrimp were 80.36% and 71.00%, respectively, and for Ucides cordatus crabs, 74.65%. It was observed that 70:30 (v/v) TFA/DCM solutions with 60 and 90% CH3COOH better facilitated the formation of membranes, while 100:0 (v/v) TFA/DCM led to the formation of agglomerates.
Regarding the monofilament diameters of the chitosan membranes, it was noted that a capillary-collector distance of 10 cm and voltages of 25 and 30 kV contributed to reducing the membrane diameters. It was found that the Young's modulus decreases with increasing chitosan concentration in the membranes. 90% CH3COOH contributed to an increase in elongation, resulting in a more flexible material. The membranes with 5% chitosan in 70:30 (v/v) TFA/DCM had the highest tensile strength.
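The Young's modulus values compared above are typically extracted as the least-squares slope of the initial linear portion of the tensile curve; a sketch with an assumed linear-region cutoff (the cutoff strain is an illustrative choice, not a value from the study):

```python
def youngs_modulus(strains, stresses, linear_upto=0.01):
    """Least-squares slope of the assumed-linear initial portion
    (strain <= linear_upto) of a tensile stress-strain curve."""
    pts = [(e, s) for e, s in zip(strains, stresses) if e <= linear_upto]
    n = len(pts)
    se = sum(e for e, _ in pts)
    ss = sum(s for _, s in pts)
    see = sum(e * e for e, _ in pts)
    ses = sum(e * s for e, s in pts)
    return (n * ses - se * ss) / (n * see - se * se)
```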
Abstract:
Progressing cavity pumping systems have been increasingly employed in the petroleum industry because of their capacity to lift highly viscous oils or fluids with high concentrations of sand or other solid particles. A Progressing Cavity Pump (PCP) consists, basically, of a rotor - a metallic device similar to an eccentric screw - and a stator - a steel tube internally covered by a double helix, which may be rigid or deformable/elastomeric. In general, it is subjected to a combination of the well pressure and the pressure generated by the pumping process itself. In elastomeric PCPs, this combined load compresses the stator and generates, or enlarges, the clearance between the rotor and the stator, thus reducing the sealing between their cavities. Such opening of the sealing region produces what is known as fluid slip, or slippage, reducing the efficiency of the PCP pumping system. Therefore, this research aims to develop a transient three-dimensional computational model that, based on single-lobe PCP kinematics, is able to simulate the fluid-structure interaction that occurs in the interior of metallic and elastomeric PCPs. The main goal is to evaluate the dynamic characteristics of PCP efficiency based on detailed and instantaneous information on the velocity, pressure and deformation fields in their interior. To reach these goals (development and use of the model), it was also necessary to develop a methodology for the generation of dynamic - mobile and deformable - computational meshes representing the fluid and structural regions of a PCP. This additional intermediate step was the biggest challenge in the elaboration and execution of the computational model, owing to the complex kinematics and critical geometry of this type of pump (different helix angles between rotor and stator as well as large length-scale aspect ratios).
The processes of dynamic mesh generation and simultaneous evaluation of the deformations suffered by the elastomer are performed by subroutines written in Fortran 90 that interact dynamically with the CFX/ANSYS fluid dynamics software. Since a linear elastic structural model is employed to evaluate elastomer deformations, it is not necessary to use any CAE package for structural analysis. However, an initial proposal for dynamic simulation using hyperelastic models through the ANSYS software is also presented in this research. Validation of the results produced with the present methodology (mesh generation, flow simulation in metallic PCPs and simulation of fluid-structure interaction in elastomeric PCPs) is obtained through comparison with experimental results reported in the literature. It is expected that the development and application of such a computational model may provide better insight into the dynamics of the flow within metallic and elastomeric PCPs, so that better control systems may be implemented in the area of artificial lift by PCP.
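The slippage and efficiency discussed above follow from standard definitions: Q_th = displacement x rotational speed, slip = Q_th - Q_actual, and volumetric efficiency η_v = Q_actual/Q_th. A sketch with illustrative values (not results from the simulations):

```python
def pcp_performance(displacement_m3_rev, speed_rpm, q_actual_m3_s):
    """Theoretical flow Q_th = displacement * speed, slip Q_th - Q_actual,
    and volumetric efficiency Q_actual / Q_th of a PCP."""
    q_th = displacement_m3_rev * speed_rpm / 60.0
    slip = q_th - q_actual_m3_s
    return q_th, slip, q_actual_m3_s / q_th
```

In an elastomeric PCP, the enlarged rotor-stator clearance raises the slip term, which is exactly the efficiency loss the fluid-structure model quantifies.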
Abstract:
This research work is based on the search for alternative vegetable reinforcements for polymer composites. The idea of a hybrid composite combining licuri vegetable fibers with synthetic fibers is pioneering in this area. Thus, a hybrid composite laminate was conceived consisting of five layers: three mats of synthetic E-glass fibers and two unidirectional fabrics of licuri vegetable fibers. In the laminate configuration the layers have an alternating distribution. The composite laminate was manufactured at Tecniplas Commerce & Industry LTD, in the form of a plate, through the hand lay-up manufacturing process. The licuri fibers used in making the laminate came from the municipality of Ilha de Maré, in the state of Bahia. After the conception and fabrication of the hybrid composite laminate, the objective of this research work focused on evaluating the mechanical properties (ultimate strength, stiffness and elongation at break) through uniaxial tensile and three-point bending tests. Comparative studies of the mechanical properties, including comparisons with other types of hybrid composite laminates studied previously, were performed. Promising results were found with respect to the strength and stiffness obtained with the hybridization process proposed here. To complement the study, the macroscopic and microscopic characteristics of the fracture were analyzed for all tests.
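A common first-order check on laminate stiffness, useful alongside the comparative studies above, is the Voigt rule of mixtures E_c = E_f·v_f + E_m·(1 - v_f); the moduli and fiber fraction below are illustrative assumptions, and the actual alternating glass/licuri layup would require classical laminate theory.

```python
def rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """Voigt (iso-strain) estimate of longitudinal composite stiffness:
    E_c = E_f * v_f + E_m * (1 - v_f)."""
    return e_fiber * v_fiber + e_matrix * (1.0 - v_fiber)
```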