103 results for "stability during processing and storage"
Abstract:
This Thesis presents the development of the dynamic model of a multirotor unmanned aerial vehicle with vertical takeoff and landing characteristics, considering input nonlinearities, together with a full-state robust backstepping controller. The dynamic model is expressed using the Newton-Euler laws, aiming to obtain a better mathematical representation of the mechanical system for system analysis and control design, not only when it is hovering, but also when it is taking off, landing, or flying to perform a task. The input nonlinearities are deadzone and saturation, to which the gravitational effect and the inherent physical constraints of the rotors are related and through which they are addressed. The experimental multirotor aerial vehicle is equipped with an inertial measurement unit and a sonar sensor, which provide measurements of attitude and altitude. A real-time attitude estimation scheme based on the extended Kalman filter using quaternions was developed. Then, for robustness analysis, the sensors were modeled as the ideal value plus an unknown bias and unknown white noise. The bounded robust attitude/altitude controllers were derived based on the notion of global uniform practical asymptotic stability for real systems, which remain globally uniformly asymptotically stable if and only if their solutions are globally uniformly bounded, dealing with convergence and stability to a ball of the state space with non-null radius, under some assumptions. The Lyapunov analysis technique was used to prove the stability of the closed-loop system, to compute bounds on the control gains, and to guarantee desired bounds on the attitude dynamics tracking errors in the presence of measurement disturbances. The control laws were tested in numerical simulations and on an experimental hexarotor developed at the UFRN Robotics Laboratory.
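As a companion illustration, the sketch below shows the quaternion attitude kinematics that underlie an extended Kalman filter prediction step, together with the sensor model described above (ideal value plus unknown bias and white noise). It is a minimal sketch, not the Thesis's implementation; all numerical values (bias, noise level, angular rate, time step) are illustrative assumptions.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate(q, gyro, dt):
    """One prediction step of attitude propagation: q_dot = 0.5 * q (x) [0, w]."""
    q_dot = 0.5 * quat_mult(q, np.array([0.0, *gyro]))
    q = q + dt * q_dot
    return q / np.linalg.norm(q)              # keep unit norm

def measure(true_rate, bias, noise_std, rng):
    """Sensor model from the abstract: ideal value + unknown bias + white noise."""
    return true_rate + bias + rng.normal(0.0, noise_std, size=3)

rng = np.random.default_rng(0)
q = np.array([1.0, 0.0, 0.0, 0.0])            # initial attitude (identity quaternion)
bias = np.array([0.01, -0.02, 0.005])         # hypothetical gyro bias [rad/s]
for _ in range(1000):
    gyro = measure(np.array([0.0, 0.0, 0.1]), bias, 0.01, rng)
    q = propagate(q, gyro, dt=0.01)
print(q)
```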
Abstract:
In this work, a parallel cooperative genetic algorithm with different evolution behaviors was developed to train and to define architectures for Multilayer Perceptron neural networks. Multilayer Perceptron neural networks are very powerful tools whose use has been vastly extended due to their ability to provide great results for a broad range of applications. The combination of genetic algorithms and parallel processing can be very powerful when applied to the learning process of the neural network, as well as to the definition of its architecture, since this procedure can be very slow, usually requiring a lot of computational time. Moreover, research combining and applying evolutionary computation to the design of neural networks is very useful, since most of the learning algorithms developed to train neural networks only adjust their synaptic weights, not considering the design of the network architecture. Furthermore, the use of cooperation in the genetic algorithm allows the interaction of different populations, avoiding local minima and helping in the search for a promising solution, accelerating the evolutionary process. Finally, individuals and evolution behavior can be exclusive to each copy of the genetic algorithm running in each task, enhancing the diversity of the populations.
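A minimal, single-population sketch of the idea (evolving both a hidden-layer size and the weights of a small Multilayer Perceptron) is given below. It is not the thesis's parallel cooperative algorithm: there is no crossover, no migration between populations and no parallelism, and the toy regression task, population size and mutation scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task (illustrative stand-in for a real training set)
X = rng.uniform(-1, 1, size=(64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def n_params(n_in, n_hidden):
    return n_in * n_hidden + n_hidden + n_hidden + 1

def mlp_forward(params, n_hidden, X):
    """One-hidden-layer MLP with tanh activation; params is a flat weight vector."""
    n_in = X.shape[1]
    w1 = params[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = params[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = params[n_in * n_hidden + n_hidden:n_in * n_hidden + 2 * n_hidden]
    b2 = params[-1]
    return np.tanh(X @ w1 + b1) @ w2 + b2

def fitness(ind):
    n_hidden, params = ind
    return -np.mean((y - mlp_forward(params, n_hidden, X)) ** 2)   # negative MSE

def random_individual():
    n_hidden = int(rng.integers(2, 9))        # architecture gene: hidden-layer size
    return (n_hidden, rng.normal(0.0, 0.5, n_params(2, n_hidden)))

def mutate(ind, scale=0.05):
    n_hidden, params = ind                    # weight genes get a Gaussian perturbation
    return (n_hidden, params + rng.normal(0.0, scale, params.shape))

population = [random_individual() for _ in range(30)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]                   # elitist selection
    population = elite + [mutate(elite[int(rng.integers(10))]) for _ in range(20)]

best = max(population, key=fitness)
print(f"best architecture: {best[0]} hidden neurons, MSE = {-fitness(best):.4f}")
```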
Abstract:
Embedded systems are widespread nowadays. An example is the Digital Signal Processor (DSP), a device with high processing power. This work's contribution consists of presenting a DSP implementation of the system logic for detecting leaks in real time. Among the various leak detection methods available today, this work uses a technique based on pipe pressure analysis employing the Wavelet Transform and Neural Networks. In this context, the DSP, in addition to performing the digital processing of the pressure signal, also communicates with a Global Positioning System (GPS), which helps locate the leak, and with a SCADA system, sharing information. To ensure robustness and reliability in the communication between the DSP and the SCADA system, the Modbus protocol is used. As this is a real-time application, special attention is given to the response time of each of the tasks performed by the DSP. Tests and leak simulations were performed using the infrastructure of the Laboratory of Evaluation of Measurement in Oil (LAMP) at the Federal University of Rio Grande do Norte (UFRN).
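The sketch below illustrates, in a hedged way, the kind of pressure-signal processing mentioned above: a multilevel wavelet decomposition whose per-band energies could serve as features for a neural classifier. It assumes the PyWavelets package and a Daubechies wavelet; the synthetic pressure trace, sampling rate and decomposition level are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(0)
fs = 1000                                      # sampling rate [Hz], illustrative
t = np.arange(0, 2.0, 1.0 / fs)
pressure = 10.0 + 0.05 * rng.normal(size=t.size)            # baseline pressure with noise
pressure[1000:] -= 0.8 * (1 - np.exp(-(t[1000:] - t[1000]) / 0.05))  # simulated leak transient

# Multilevel wavelet decomposition of the pressure signal
coeffs = pywt.wavedec(pressure, 'db4', level=4)

# Energy of each band: a typical feature vector that could feed a neural classifier
features = np.array([np.sum(c ** 2) for c in coeffs[1:]])
print(features)
```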
Abstract:
Nowadays, when market competition requires products with better quality, a constant search for cost savings and a better use of raw materials, the search for more efficient control strategies becomes vital. In Natural Gas Processing Units (NGPUs), as in most chemical processes, quality control is accomplished through the composition of their products. However, chemical composition analysis has a long measurement time, even when performed by instruments such as gas chromatographs. This fact hinders the development of control strategies that could provide a better process yield. Natural gas processing is one of the most important activities in the petroleum industry. The main economic product of an NGPU is liquefied petroleum gas (LPG). LPG is ideally composed of propane and butane; in practice, however, its composition includes some contaminants, such as ethane and pentane. In this work, an inferential system using neural networks is proposed to estimate the ethane and pentane mole fractions in LPG and the propane mole fraction in the residual gas. The goal is to provide the values of these estimated variables every minute using a single multilayer neural network, making it possible to apply inferential control techniques in order to monitor the LPG quality and to reduce the propane loss in the process. To develop this work, an NGPU composed of two distillation columns, a deethanizer and a debutanizer, was simulated in HYSYS® software. The inference is performed from the process variables of the PID controllers present in the instrumentation of these columns. To reduce the complexity of the inferential neural network, the statistical technique of principal component analysis is used to decrease the number of network inputs, thus forming a hybrid inferential system. A simple strategy to correct the inferential system in real time, based on measurements from chromatographs that may exist in the process under study, is also proposed.
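A minimal sketch of the hybrid scheme described above (PCA to reduce the controller variables, followed by a single multilayer network with three outputs) is shown below, assuming scikit-learn. The data are synthetic stand-ins; the number of components, hidden-layer size and all values are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-in for PID controller variables from the two columns (illustrative)
X = rng.normal(size=(500, 20))
# Synthetic stand-in for the three estimated mole fractions
y = X[:, :3] @ rng.normal(size=(3, 3)) * 0.01 + 0.02

# PCA reduces the 20 controller variables to a few principal components,
# which feed a single multilayer network with three outputs
model = make_pipeline(
    PCA(n_components=5),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:2]))
```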
Abstract:
Previous works have studied the characteristics and peculiarities of P2P networks, especially information security aspects. Most works deal, in some way, with the sharing of resources and, in particular, the storage of files. This work complements previous studies and adds new definitions related to this kind of system. A system for safe storage of files (SAS-P2P) was specified and built, based on P2P technology, using the JXTA platform. This system uses standard X.509 and PKCS#12 digital certificates, issued and managed by a public key infrastructure, which was also specified and developed based on P2P technology (PKIX-P2P). The information is stored in a specially prepared XML file, facilitating handling and interoperability among applications. The intention in developing the SAS-P2P system was to offer a complementary service for Giga Natal network users, through which the participants in this network can collaboratively build a shared storage area, with important security features such as availability, confidentiality, authenticity and fault tolerance. Besides the specification, development of prototypes and testing of the SAS-P2P system, tests of the PKIX-P2P Manager module were also performed, in order to assess its fault tolerance and the effective calculation of the reputation of the certificate authorities participating in the system.
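To make the certificate handling mentioned above concrete, the sketch below generates a key pair, an X.509 certificate and a PKCS#12 bundle using the Python cryptography package. It is only an illustration under assumptions: the certificate here is self-signed, whereas in SAS-P2P certificates are issued by the PKIX-P2P infrastructure, and the peer name and passphrase are hypothetical.

```python
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs12

# RSA key pair for a hypothetical peer
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Self-signed X.509 certificate (in SAS-P2P it would be issued by the PKIX-P2P CA)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"sas-p2p-peer")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# Key and certificate packed into a PKCS#12 container, as mentioned in the abstract
p12 = pkcs12.serialize_key_and_certificates(
    name=b"sas-p2p-peer", key=key, cert=cert, cas=None,
    encryption_algorithm=serialization.BestAvailableEncryption(b"passphrase"),
)
print(len(p12), "bytes of PKCS#12 data")
```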
Abstract:
This work presents a description of the development of models in the DigSILENT PowerFactory™ program for transient stability studies in power systems with wind turbines. The main goal is to make available the means to use a widely published dynamic simulation program for power systems and to employ it as a tool that helps evaluate the results of the programs used for this purpose. The process of simulation and analysis of results starts after the phase describing the model configuration. During this phase, the results obtained with DigSILENT PowerFactory™ are compared with those from ATP, an internationally recognized program chosen for the validation. The main tools and guidelines for using the PowerFactory™ program are presented here, directing these elements to the solution of the problem addressed. For the simulation, a real system to which a wind farm will be connected is used. Two different wind turbine technologies were implemented: a doubly-fed induction generator with a frequency converter connecting the rotor to the stator and to the grid, and a synchronous wind generator with a frequency converter interconnecting the generator to the grid. Besides presenting the basic concepts of dynamic simulation, the implemented control strategies and the models of the turbine and converters are described. The stability of the wind turbine interconnected to the grid is analyzed under many operating conditions resulting from diverse kinds of disturbances.
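As a hedged illustration of what a transient stability simulation computes, the sketch below integrates the classical single-machine swing equation through a fault and its clearance. It is a textbook simplification, not the PowerFactory™ or ATP models used in the work; every numerical value is an illustrative assumption.

```python
import numpy as np

# Classical swing equation  M * d2(delta)/dt2 = Pm - Pmax * sin(delta) - D * d(delta)/dt,
# integrated with forward Euler; all values are illustrative per-unit assumptions.
M, D = 0.1, 0.02            # inertia and damping constants
Pm = 0.8                    # mechanical power [pu]
Pmax = 1.8                  # pre-fault maximum electrical power [pu]
Pmax_fault = 0.4            # reduced transfer capability during the fault [pu]

dt, t_end = 1e-3, 3.0
t_fault, t_clear = 1.0, 1.15          # fault applied and cleared [s]
delta = np.arcsin(Pm / Pmax)          # initial rotor angle [rad]
omega = 0.0                           # rotor speed deviation [rad/s]

for k in range(int(t_end / dt)):
    t = k * dt
    p_max = Pmax_fault if t_fault <= t < t_clear else Pmax
    domega = (Pm - p_max * np.sin(delta) - D * omega) / M
    omega += dt * domega
    delta += dt * omega
print(f"final rotor angle: {np.degrees(delta):.1f} deg")
```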
Abstract:
Image compression consists of representing an image with a small amount of data, without loss of visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing and storage time. Many applications need images, so compressing image data is important: medical images, satellite images, sensors, etc. In this work, a new color image compression method is proposed, based on a measure of the information in each band. This technique is called Self-Adaptive Compression (SAC), and each band of the image is compressed with a different threshold in order to better preserve information. SAC applies strong compression to high-redundancy bands, that is, those with less information, and soft compression to bands with a larger amount of information. Two image transforms are used in this technique: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step is to convert the data into decorrelated bands using PCA; the DCT is then applied to each band. Data loss occurs when a threshold discards coefficients. This threshold is calculated from two elements: the PCA result and a user parameter, which defines the compression rate. The system produces three different thresholds, one for each image band, proportional to its amount of information. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of MSE (Mean Square Error). Tests showed that SAC has better quality at high compression rates, with two advantages: (a) since it is adaptive, it is sensitive to the image type, that is, it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.), and (b) it needs only one user parameter, that is, little human intervention is required.
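The sketch below illustrates the pipeline described above (PCA decorrelation, per-band DCT, a band-specific threshold, then the inverse transforms), assuming NumPy and SciPy. The thresholding rule shown, which scales a single user parameter by the inverse of each band's variance, is an assumed stand-in for the SAC rule, and the random image is an illustrative placeholder.

```python
import numpy as np
from scipy.fft import dctn, idctn   # SciPy assumed available

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                 # stand-in for an RGB image in [0, 1]

# Step 1: PCA decorrelates the three color bands
pixels = img.reshape(-1, 3)
mean = pixels.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(pixels - mean, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
bands = ((pixels - mean) @ eigvec).reshape(img.shape)

# Step 2: DCT per band, with a band-specific threshold: low-variance
# (low-information) bands receive a harder threshold
user_rate = 0.02                              # single user parameter (illustrative)
rec_bands = np.empty_like(bands)
for b in range(3):
    coeffs = dctn(bands[..., b], norm='ortho')
    thr = user_rate * np.abs(coeffs).max() * (eigval.max() / eigval[b])
    coeffs[np.abs(coeffs) < thr] = 0.0        # discard small coefficients
    rec_bands[..., b] = idctn(coeffs, norm='ortho')

# Step 3: inverse PCA reconstructs the RGB image
rec = (rec_bands.reshape(-1, 3) @ eigvec.T + mean).reshape(img.shape)
print("MSE:", np.mean((img - rec) ** 2))
```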
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. The acquisition, processing and interpretation of seismic data are the stages that constitute a seismic study. Seismic processing, in particular, is focused on producing an image that represents the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry and also due to technological advances in hardware, which achieved higher storage and digital information processing capabilities and enabled the development of more sophisticated processing algorithms, such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy may be a very time-consuming process, due to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it may take days, weeks and even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work carried out the parallelization of the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, due to the large computational effort required by this migration technique. Furthermore, analyses of speedup and efficiency were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advancement expected from future processors.
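For reference, the speedup and efficiency metrics mentioned above are computed as in the sketch below. The wall-clock times are hypothetical placeholders, not results from the thesis.

```python
# Minimal sketch of the speedup and efficiency metrics, computed from
# hypothetical wall-clock times (illustrative values only).
serial_time = 3600.0                                # seconds, 1 thread
parallel_times = {2: 1850.0, 4: 960.0, 8: 520.0}    # threads -> seconds

for threads, t in parallel_times.items():
    speedup = serial_time / t                       # S(p) = T(1) / T(p)
    efficiency = speedup / threads                  # E(p) = S(p) / p
    print(f"{threads} threads: speedup = {speedup:.2f}, efficiency = {efficiency:.2f}")
```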
Abstract:
The investigation of the viability of using containers for Natural Gas Vehicle (NGV) storage with geometries different from the commercial standards comes from the need to combine the environmental, financial and technological benefits offered by gas combustion with the convenience of not modifying the original design of the automobile. The use of the current cylindrical models for storage in converted vehicles is justified by the excellent behavior that this geometry presents under the stresses imposed by the high pressures to which these reservoirs are subjected. However, recent research directed toward the application of adsorbent materials in natural gas reservoirs has demonstrated a substantial reduction of pressure and, consequently, a relief of the stresses in the reservoirs. Accordingly, this study considers alternative geometries for NGV reservoirs, seeking to minimize dimensions and weight while retaining the capacity to resist the stresses imposed by the new pressure condition. The parameters of the proposed reservoirs are calculated through a mathematical study of the internal pressure according to the Brazilian standards (NBR) for pressure vessels. Finally, simulations of the behavior of the new geometries are carried out using the commercially available Finite Element Method (FEM) software package ALGOR® to verify the efficiency of the reservoirs under the gas pressure load.
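As a hedged companion to the pressure vessel calculation mentioned above, the sketch below evaluates the classical thin-wall hoop and longitudinal stresses for a cylindrical reservoir. These are generic textbook formulas, not the NBR design procedure used in the work, and all numerical values are illustrative assumptions.

```python
def cylinder_stresses(p, r, t):
    """Thin-wall stresses for internal pressure p [Pa], mean radius r [m], wall thickness t [m]."""
    hoop = p * r / t                 # circumferential (hoop) stress
    longitudinal = p * r / (2 * t)   # axial (longitudinal) stress
    return hoop, longitudinal

p = 3.5e6     # internal pressure [Pa], illustrative low-pressure (adsorbed gas) condition
r = 0.10      # mean radius [m], illustrative
t = 0.004     # wall thickness [m], illustrative
hoop, longit = cylinder_stresses(p, r, t)
print(f"hoop = {hoop/1e6:.1f} MPa, longitudinal = {longit/1e6:.1f} MPa")
```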
Abstract:
Obtaining ceramic materials from polymeric precursors is the subject of numerous studies due to the lower energy cost compared to conventional processing. The aim of this study is to investigate and improve the mechanism for obtaining a ceramic matrix composite (CMC) based on SiOC/Al2O3/TiC, with greater strength, by pyrolysis of polysiloxane in the presence of active and inert fillers at pyrolysis temperatures lower than those usually adopted for this technique. It also investigates the influence of the pyrolysis temperature, the content of Al as active filler, the presence of infiltrating agents (Al, glass and polymer) after pyrolysis, and the infiltration temperature and time on some physical and mechanical properties. Alumina is used as inert filler, and Al and Ti as active fillers in the pyrolysis. Aluminum, glass and polysiloxane are used as post-pyrolysis infiltration agents. The results are analyzed with respect to porosity and bulk density by the Archimedes method, the presence of crystalline phases by X-ray diffraction (XRD), and the microstructure by scanning electron microscopy (SEM). The ceramics pyrolyzed between 850 °C and 1400 °C present porosity from 15% to 33%, density of 2.34 g/cm³ and four-point flexural strength from 30 to 42 MPa. The microstructure is porous, with an Al2O3 matrix reinforced by TiC and AlTi3 particles. The post-pyrolysis infiltration reveals a decrease in porosity and an increase in density and strength. The composites have potential applications where thermal stability is the main requirement.
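The Archimedes method mentioned above is illustrated in the sketch below using its standard formulation for bulk density and apparent porosity. The specimen masses are illustrative assumptions, not measurements from the thesis.

```python
def archimedes(m_dry, m_sat, m_imm, rho_liquid=1.0):
    """Standard Archimedes relations: masses in g (dry, saturated, immersed), rho in g/cm3."""
    bulk_density = m_dry * rho_liquid / (m_sat - m_imm)            # g/cm3
    apparent_porosity = 100.0 * (m_sat - m_dry) / (m_sat - m_imm)  # %
    return bulk_density, apparent_porosity

rho, porosity = archimedes(m_dry=4.68, m_sat=5.10, m_imm=3.10)     # illustrative masses
print(f"bulk density = {rho:.2f} g/cm3, apparent porosity = {porosity:.1f} %")
```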
Abstract:
The use of polymer-based coatings is a promising approach to reduce the corrosion problem in carbon steel pipes used for the transport of oil and gas in the oil industry. However, conventional polymer coatings offer limited properties, which often cannot meet design requirements for this type of application, particularly with regard to operating temperature and wear resistance. Polymer nanocomposites are known to exhibit superior properties and, therefore, offer great potential for this type of application. Nevertheless, the degree of enhancement of a particular property depends greatly on the matrix/nanoparticle material system used, the matrix/nanoparticle interfacial bonding and also the state of dispersion of the nanoparticles in the polymer matrix. The objective of the present research is to develop and characterize polymer-based nanocomposites to be used as coatings in metallic pipelines for the transportation of oil and natural gas. Epoxy/SiO2 nanocomposites with nanoparticle contents of 2, 4 and 8 wt% were processed using a high-energy mill. Modifications of the SiO2 nanoparticles' surfaces with two different silane agents were carried out and their effect on the material properties was investigated. The state of dispersion of the processed materials was studied using Scanning and Transmission Electron Microscopy (SEM and TEM) micrographs. Thermogravimetric analysis (TG) was also conducted to determine the thermal stability of the nanocomposites. In addition, the processed nanocomposites were characterized by dynamic mechanical analysis (DMA) to investigate the effect of nanoparticle content and silane treatment on the viscoelastic properties and on the glass transition temperature. Finally, pin-on-disc wear tests were carried out to determine the effects of the nanoparticles and of the silane treatments studied. According to the results, the addition of SiO2 nanoparticles treated with silane increased the thermal stability, the storage modulus and the Tg of the epoxy resin and decreased the wear rate. This confirms that the interaction between the nanoparticles and the polymer chains plays a critical role in the properties of the nanocomposites.
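For pin-on-disc tests such as those mentioned above, a specific wear rate is commonly reported; the sketch below shows its generic definition (worn volume per unit load and sliding distance). This is a standard formulation, not the thesis's measured data, and all values are illustrative assumptions.

```python
def specific_wear_rate(mass_loss_g, density_g_cm3, load_N, distance_m):
    """Specific wear rate in mm^3/(N*m) from mass loss, density, normal load and sliding distance."""
    volume_mm3 = (mass_loss_g / density_g_cm3) * 1000.0   # cm3 -> mm3
    return volume_mm3 / (load_N * distance_m)

k = specific_wear_rate(mass_loss_g=0.012, density_g_cm3=1.2, load_N=10.0, distance_m=1000.0)
print(f"specific wear rate = {k:.2e} mm^3/(N*m)")
```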
Abstract:
The limited information about the basic, chemical, mineralogical and mechanical characteristics of the raw materials used in the manufacture of ceramic products in the Cariri economic region, specifically in the city of Crato, Ceará state, motivated the development of this work, since in the existing economic context of this region these products appear as important in the production chains. Twenty-five collections of soil test specimens were made, and the study was performed to differentiate the raw materials and the processing variables of the raw materials in the factories that produce ceramic products by extrusion and pressing. The results were obtained after the following analyses: grain size, plasticity index, X-ray fluorescence, X-ray diffraction, thermal analyses and technological properties. Through the gresification curves, a comparison was made between linear shrinkage, water absorption, porosity and bulk density. The results show an excellent particle size distribution and characteristics acceptable for processing, with a dark red fired color, therefore requiring the mixture of a less plastic clay with coarse granulation to work as a plasticity reducer. In spite of the different forming methods, pressing and extrusion, the water absorption and flexural rupture strength characteristics were shown to be within the ABNT standards.
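Two of the technological properties cited above, water absorption and linear firing shrinkage, are computed from their standard definitions as in the sketch below; the specimen measurements used are illustrative assumptions.

```python
def water_absorption(m_dry_g, m_sat_g):
    """Water absorption [%] of a fired specimen."""
    return 100.0 * (m_sat_g - m_dry_g) / m_dry_g

def linear_shrinkage(len_dry_mm, len_fired_mm):
    """Linear firing shrinkage [%] of a specimen."""
    return 100.0 * (len_dry_mm - len_fired_mm) / len_dry_mm

print(f"WA = {water_absorption(25.0, 27.4):.1f} %")     # illustrative masses [g]
print(f"LS = {linear_shrinkage(60.0, 57.6):.1f} %")     # illustrative lengths [mm]
```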
Abstract:
To obtain process stability and a quality weld bead, an adequate set of parameters is necessary: base current and base time, pulse current and pulse time, because these influence the metal transfer mode and the weld quality in pulsed MIG (MIG-P), sometimes requiring special power sources with synergic modes and external control to achieve this stability. This work aims to analyze and compare the effects of pulse parameters and droplet size on arc stability in MIG-P. Four sets of pulse parameters were analyzed: Ip = 160 A, tp = 5.7 ms; Ip = 300 A, tp = 2 ms; Ip = 350 A, tp = 1.2 ms; and Ip = 350 A, tp = 0.8 ms. Each was analyzed with three different droplet diameters: equal to, larger than and smaller than the diameter of the wire electrode. For purposes of comparison, the same relation between the mean current and the welding speed was maintained, generating a constant (Im/Vs = K) for all parameter sets. Bead-on-plate welding with MIG-P at a constant contact tip-to-workpiece distance (DBCP) was performed first; subsequently, bead-on-plate welding was performed on a plate inclined at 10 degrees to vary the DBCP, which made it possible to assess how the MIG-P process behaved in such a situation and also to evaluate MIG-P with adaptive control, in order to maintain constant arc stability. High-speed filming synchronized with current and voltage acquisition (oscillograms) was also carried out for better interpretation of the transfer mechanism and better evaluation of the stability of the process. It is concluded that parameter sets 3 and 4 exhibited greater versatility; droplet diameters equal to or slightly smaller than the wire diameter exhibited better stability due to their higher detachment frequency, and droplet detachment during the base phase does not harm the maintenance of the arc height.
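The Im/Vs = K relation mentioned above depends on the mean current of the pulsed waveform; a minimal sketch of that calculation is given below. The base current, base time and welding speed used are illustrative assumptions, since the abstract lists only the pulse current and pulse time of each parameter set.

```python
def mean_current(Ip, tp_ms, Ib, tb_ms):
    """Im = (Ip*tp + Ib*tb) / (tp + tb) for a rectangular pulsed-MIG waveform."""
    return (Ip * tp_ms + Ib * tb_ms) / (tp_ms + tb_ms)

Im = mean_current(Ip=300.0, tp_ms=2.0, Ib=60.0, tb_ms=8.0)   # pulse values from set 2; base values assumed
Vs = 30.0                                                    # welding speed [cm/min], assumed
print(f"Im = {Im:.0f} A, K = Im/Vs = {Im / Vs:.2f} A*min/cm")
```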