1000 results for CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA::MEDIDAS ELETRICAS, MAGNETICAS E ELETRONICAS INSTRUMENTACAO


Relevance:

100.00%

Publisher:

Abstract:

In this work we use Interval Mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here is oriented to signals, systems, sampling, quantization, coding and Fourier transforms. A detailed study of some interval arithmetics that handle complex numbers is provided, namely: complex (or rectangular) interval arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate some properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C), endowed with any correct arithmetic, are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of real and complex values in floating-point systems. That is, if a number x ∈ R is not representable in a floating-point system F, then it is mapped to an interval [x⁻; x⁺] such that x⁻ is the largest number in F that is smaller than x and x⁺ is the smallest number in F that is greater than x. This interval representation is the starting point for definitions such as interval signals and systems taking real or complex values. It provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity and linearity to interval systems. The quantization process is extended to its interval counterpart, and the interval versions of quantization levels, quantization error and encoded signal are then provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate for the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a Matlab implementation are given
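
As an illustration of the floating-point interval representation described above, the following minimal Python sketch (not the thesis code, which was implemented in Matlab) encloses a real number between its two neighbouring machine numbers and adds two such intervals with outward rounding; the function names and the use of Python's float type as the system F are assumptions made only for this example.

```python
import math
from fractions import Fraction

def enclose(x: Fraction) -> tuple[float, float]:
    """Smallest machine interval [lo, hi] that contains the exact rational x."""
    approx = float(x)                      # round-to-nearest image of x in F
    if Fraction(approx) == x:              # x is representable: degenerate interval
        return approx, approx
    if Fraction(approx) < x:
        return approx, math.nextafter(approx, math.inf)
    return math.nextafter(approx, -math.inf), approx

def iv_add(a, b):
    """Interval addition with outward rounding of the endpoints."""
    lo = math.nextafter(a[0] + b[0], -math.inf)   # pessimistic lower endpoint
    hi = math.nextafter(a[1] + b[1], math.inf)    # pessimistic upper endpoint
    return lo, hi

x = enclose(Fraction(1, 10))     # 0.1 is not exactly representable in binary F
y = enclose(Fraction(2, 10))
print(x, y, iv_add(x, y))        # an interval guaranteed to contain 0.3
```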

Relevance:

100.00%

Publisher:

Abstract:

This work analyzes the behavior of the gas flow of plunger lift wells producing to well-testing separators on offshore production platforms, aiming at a technical procedure to estimate the gas flow during the slug production period. The motivation for this work arose from the equipping of some wells with the plunger lift method by PETROBRAS in the Ubarana sea field, located off the coast of Rio Grande do Norte State, where the produced fluids are measured in well-testing separators at the platform. The artificial oil lift method called plunger lift is used when the available reservoir energy is not high enough to overcome all the load losses needed to lift the oil from the bottom of the well to the surface continuously. This method consists, basically, of a free piston acting as a mechanical interface between the formation gas and the produced liquids, greatly increasing the well's lifting efficiency. A pneumatic control valve mounted on the flow line controls the cycles. When this valve opens, the plunger starts to move from the bottom to the surface of the well, lifting all the oil and gas above it until they reach the well-test separator, where the fluids are measured. The well-test separator is used to measure all the volumes produced by the well during a certain period of time called a production test. In most cases, the separators are designed to measure stabilized flow, in other words, reasonably constant flow, by the use of electronic level and pressure controllers (PLC) and by the assumption of a steady pressure inside the separator. With plunger lift wells, the liquid and gas flows at the surface are cyclical and unstable, which causes slugs inside the separator, mainly in the gas phase, and introduces significant errors in the measurement system (e.g., overrange errors). The gas flow analysis proposed in this work is based on two mathematical models used together: i) a plunger lift well model proposed by Baruzzi [1], with later modifications by Bolonhini [2] to build a plunger lift simulator; ii) a two-phase (gas + liquid) separator model derived from the three-phase (gas + oil + water) separator model proposed by Nunes [3]. Based on the models above and on field data collected from the well-test separator of the PUB-02 platform (Ubarana sea field), it was possible to demonstrate that the output gas flow of the separator can be estimated, with reasonable precision, from the control signal of the Pressure Control Valve (PCV). Several models from the MATLAB® System Identification Toolbox were analyzed to evaluate which one best fits the data collected in the field. For the validation of the models, the AIC criterion was used, as well as a variant of the cross-validation criterion. The ARX model fit the data best, so a recursive algorithm (RARX) was also evaluated with real-time data. The results were quite promising, indicating the viability of estimating the output gas flow rate of a plunger lift well producing to a well-test separator from the built-in information of the control signal sent to the PCV
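
As a rough illustration of the identification step described above, the sketch below fits a low-order ARX model by least squares, relating an input u (standing in for the PCV control signal) to an output y (standing in for the separator gas flow). It is a minimal stand-in for the MATLAB® System Identification Toolbox workflow used in the work; the model orders, the synthetic data and the function names are assumptions.

```python
import numpy as np

# Minimal sketch, not the thesis implementation: fit a low-order ARX model
#   y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
# by ordinary least squares.
def fit_arx(u, y, na=2, nb=2):
    """Estimate the ARX coefficients [a1..a_na, b1..b_nb] from input u, output y."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        past_y = [-y[k - i] for i in range(1, na + 1)]
        past_u = [u[k - i] for i in range(1, nb + 1)]
        rows.append(past_y + past_u)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta

# Synthetic check: data generated by y(k) = 0.9*y(k-1) + 0.5*u(k-1).
rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, 500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]
print(fit_arx(u, y))   # approximately [-0.9, 0.0, 0.5, 0.0]
```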

Relevance:

100.00%

Publisher:

Abstract:

Conventional control strategies used in shunt active power filters (SAPF) employ real-time instantaneous harmonic detection schemes, which are usually implemented with digital filters. This increases the number of current sensors in the filter structure, which results in higher costs. Furthermore, these detection schemes introduce time delays that can deteriorate the harmonic compensation performance. Differently from the conventional control schemes, this paper proposes a non-standard control strategy that indirectly regulates the phase currents of the power mains. The reference currents of the system are generated by the dc-link voltage controller and are based on the active power balance of the SAPF system. The reference currents are aligned with the phase angle of the power mains voltage vector, which is obtained by using a dq phase-locked loop (PLL) system. The current control strategy is implemented by an adaptive pole placement control strategy integrated with a variable structure control scheme (VS-APPC). In the VS-APPC, the internal model principle (IMP) of the reference currents is used to achieve zero steady-state tracking error of the power system currents. This forces the phase currents of the system mains to be sinusoidal with low harmonic content. Moreover, the current controllers are implemented in the stationary reference frame to avoid transformations to the mains voltage vector reference coordinates. The proposed current control strategy enhances the performance of the SAPF, with fast transient response and robustness to parametric uncertainties. Experimental results are shown to demonstrate the effectiveness of the proposed SAPF control system
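
The indirect current control idea can be pictured with the minimal sketch below: a PI controller acting on the dc-link voltage error sets the amplitude of the mains reference currents, which are then aligned with the phase angle supplied by the dq-PLL. This is an illustrative outline only; the gains, sampling period and names are assumptions, not the paper's implementation.

```python
import numpy as np

class DcLinkPI:
    """PI controller on the dc-link voltage loop (illustrative gains)."""
    def __init__(self, kp=0.5, ki=20.0, dt=1e-4):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

    def amplitude(self, vdc_ref, vdc_meas):
        """Reference current amplitude derived from the dc-link voltage error."""
        error = vdc_ref - vdc_meas
        self.integral += self.ki * error * self.dt
        return self.kp * error + self.integral

def reference_currents(amplitude, theta):
    """Sinusoidal phase-current references aligned with the mains voltage vector."""
    return amplitude * np.array([np.sin(theta),
                                 np.sin(theta - 2 * np.pi / 3),
                                 np.sin(theta + 2 * np.pi / 3)])

pi = DcLinkPI()
i_ref = reference_currents(pi.amplitude(vdc_ref=450.0, vdc_meas=447.0), theta=0.3)
print(i_ref)
```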

Relevance:

100.00%

Publisher:

Abstract:

In this work, the robustness and performance of the variable structure adaptive pole placement controller (VS-APPC) are evaluated and the algorithm is applied to a motor control system. The controller robustness evaluation is done through simulations in which the following adversities are introduced into the system: time delay, actuator saturation (bounded response), disturbances, parametric variation and unmodeled dynamics. The VS-APPC is compared with a PI controller, a pole placement controller (PPC) and an adaptive pole placement controller (APPC). The VS-APPC is simulated to track step and sine references, and it is applied to a three-phase induction motor control system to track a sine signal in the stator reference frame. Simulation and experimental results demonstrate the efficiency and robustness of this control strategy
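
The following minimal sketch illustrates the kind of robustness test described above: a discrete PI controller (one of the compared controllers) tracking a sine reference on a simple first-order plant subjected to actuator saturation, a one-sample input delay and a step disturbance. The plant model, gains and signal levels are illustrative assumptions, not the thesis system.

```python
import numpy as np

dt, T = 1e-3, 2.0
t = np.arange(0.0, T, dt)
ref = np.sin(2 * np.pi * 1.0 * t)          # sine reference
kp, ki = 4.0, 40.0                         # PI gains (assumed)
a, b = 5.0, 5.0                            # plant: dy/dt = -a*y + b*u (assumed)
y, integral, u_prev = 0.0, 0.0, 0.0
out = []
for k, tk in enumerate(t):
    e = ref[k] - y
    integral += e * dt
    u = kp * e + ki * integral
    u = max(-2.0, min(2.0, u))             # actuator bound (saturation)
    d = 0.5 if tk > 1.0 else 0.0           # step disturbance at t = 1 s
    y += dt * (-a * y + b * u_prev + d)    # u_prev models a one-sample delay
    u_prev = u
    out.append(y)

print(float(np.max(np.abs(np.array(out) - ref))))   # peak tracking error
```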

Relevance:

100.00%

Publisher:

Abstract:

This work presents a theoretical and numerical analysis using the transverse resonance technique (TRT) and a proposed modified transverse resonance technique (MTRT), applied to the analysis of the dispersive characteristics of microstrip lines built on truncated isotropic and anisotropic dielectric substrates. The TRT uses the transmission-line model in the transverse section of the structure, allowing its dispersive behavior to be analyzed. The difference between the TRT and the MTRT lies basically in the resonance direction: while in the TRT the resonance is calculated along the axis normal to the metallic strip, the MTRT considers the resonance in the plane parallel to the metallic strip. Although the application of the MTRT results in a more complex equivalent circuit, its use allows additional characterizations, such as longitudinal section electric (LSE) and longitudinal section magnetic (LSM) modes, microstrips with truncated substrates, and structures with different dielectric regions. A computer program using the TRT and the MTRT proposed in this work is implemented for the characterization of microstrips on truncated isotropic and anisotropic substrates. In this analysis, propagating and evanescent modes are considered; thus, it is possible to characterize both the dominant and the higher-order modes of the structure. Numerical results are presented for the effective permittivity, characteristic impedance and relative phase velocity of microstrip lines with different parameters and dimensions of the dielectric substrate. Agreement with results from the literature is shown, as well as with experimental results. In some cases, a convergence analysis is also performed by considering limiting conditions, such as particular cases of isotropic materials or structures with dielectrics of infinite size found in the literature. The numerical convergence of the formulation is also analyzed. Finally, conclusions and suggestions for the continuation of this work are presented
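
For reference, the transverse resonance condition on which the TRT rests can be stated in its textbook form (this is the standard relation found in microwave texts, not reproduced from the thesis): at any reference plane y = y₀ of the transverse equivalent circuit, the input impedances (or, equivalently, admittances) seen in the two opposite directions must cancel.

```latex
% Standard transverse resonance condition (textbook form, not the thesis notation).
\begin{equation}
  \overrightarrow{Z}_{\mathrm{in}}(y_0) + \overleftarrow{Z}_{\mathrm{in}}(y_0) = 0
  \qquad\text{or, equivalently,}\qquad
  \overrightarrow{Y}_{\mathrm{in}}(y_0) + \overleftarrow{Y}_{\mathrm{in}}(y_0) = 0 .
\end{equation}
```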

Relevance:

100.00%

Publisher:

Abstract:

In this work, the Transverse Resonance Technique (TRT) and the Modified Transverse Resonance Technique (MTRT) are used to determine the resonant-mode frequencies of microstrip antennas with square, rectangular and circular patches on isotropic and anisotropic substrates. For this purpose, an equivalent cavity model is proposed in which the rectangular patch antenna is represented as the superposition of two infinite microstrip lines, one of width W, representing the dimension that gives the patch width, and the other of width L, representing the dimension that gives the patch length. The efficiency and applicability of the cited methods are assessed by comparison with experimental results and with results obtained by other techniques. Three situations are examined: structures with infinite substrate, structures with pedestal-type substrate, and structures whose substrate is truncated beyond the limits of the metallic patch. The results show that the full-wave analysis techniques used in this work, owing to their more rigorous mathematical formalism, are efficient and accurate both for structures with isotropic substrates and for those with anisotropic substrates. Initially, only structures with isotropic substrates, with different dielectric constants, are considered, and the influence of the substrate width on the resonant-mode frequencies of the antennas is evaluated. Subsequently, the analysis of the dielectric truncation is carried out for structures with anisotropic substrates. In all cases, the experimental results, obtained from fabricated prototypes, are compared with the simulation results obtained with the TRT and MTRT techniques. Finally, the described techniques are applied to circular patch antennas, using an equivalence technique to transform the circular antenna into an equivalent square or rectangular one, depending on the mode of interest. The results are then analyzed, showing good agreement and indicating the feasibility of the method. Conclusions are then presented and some topics are suggested for the continuation of this work
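
For context, the textbook cavity-model approximation for the resonant frequencies of an ideal rectangular patch (effective length L, width W, on an infinite isotropic substrate of relative permittivity ε_r, with c the speed of light in vacuum) is shown below. The thesis refines this idealized picture with the TRT/MTRT full-wave analysis for truncated and anisotropic substrates, so the formula is given only as a baseline, not as the method used.

```latex
% Standard cavity-model estimate (baseline only; not the TRT/MTRT formulation).
\begin{equation}
  f_{mn} \approx \frac{c}{2\sqrt{\varepsilon_r}}
  \sqrt{\left(\frac{m}{L}\right)^{2} + \left(\frac{n}{W}\right)^{2}},
  \qquad m, n = 0, 1, 2, \ldots
\end{equation}
```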

Relevance:

100.00%

Publisher:

Abstract:

This study developed software routines for a system basically composed of a digital signal processor (DSP) board and a supervisory application, whose main function was to correct the measurements of a turbine gas meter. This correction is based on an intelligent algorithm built on an artificial neural network. The routines were implemented both in the supervisory environment and in the DSP environment and cover three main items: processing, communication and supervision
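
A minimal sketch of the correction idea, assuming a small feed-forward network that maps the raw turbine-meter reading (plus operating conditions) to a corrected flow value. The network size, inputs and the random weights below are purely illustrative; the actual routines ran on the DSP board and in the supervisory application and used a network trained from calibration data.

```python
import numpy as np

def mlp_correct(raw_flow, pressure, temperature, W1, b1, W2, b2):
    """Forward pass of a 3-input, 1-output MLP used as a meter-correction curve."""
    x = np.array([raw_flow, pressure, temperature])
    hidden = np.tanh(W1 @ x + b1)          # hidden layer, tanh activation
    return (W2 @ hidden + b2).item()       # corrected flow estimate

# Hypothetical "trained" parameters (random here, only to make the sketch run).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)
print(mlp_correct(120.0, 5.2, 25.0, W1, b1, W2, b2))
```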

Relevance:

100.00%

Publisher:

Abstract:

This work presents a study of implementation procedures for the characterization of multiband microstrip patch antennas used in wireless communication systems. A multilayer perceptron artificial neural network is used to locate the operating frequency bands of the antenna for different geometric configurations. The antenna is designed, simulated and tested in the laboratory. The results obtained are compared in order to validate the performance of the prototypes, and good agreement is observed in metric terms. The neurocomputational procedures developed can be extended to other electromagnetic structures of wireless communication systems
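
As an illustration of the neurocomputational procedure described above, the sketch below trains a multilayer perceptron to map antenna geometry to its operating frequencies using scikit-learn; the feature set, the tiny training table and the network size are illustrative assumptions, not the thesis data or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: [patch length (mm), patch width (mm), substrate
# height (mm), relative permittivity] -> [f1 (GHz), f2 (GHz)] from simulations.
X = np.array([[28.0, 38.0, 1.6, 4.4],
              [25.0, 34.0, 1.6, 4.4],
              [31.0, 41.0, 1.6, 4.4]])
y = np.array([[2.45, 5.20],
              [2.70, 5.60],
              [2.25, 4.90]])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[27.0, 37.0, 1.6, 4.4]]))   # estimated operating bands
```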

Relevance:

100.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed for the simulation of electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used in those formulations were mostly based on the characteristics of transmission systems, which were the main concern of engineers and researchers. The physical characteristics of these systems, though, are quite different from those of distribution systems. In transmission systems the voltage levels are high and the lines are generally very long; these aspects cause the capacitive and inductive effects of the system to have a considerable influence on the quantities of interest, so they must be taken into consideration. Also in transmission systems, the loads have a macro nature, such as cities, neighborhoods or large industries. These loads are, in general, practically balanced, which reduces the need for three-phase load flow methodologies. Distribution systems, on the other hand, present different characteristics: the voltage levels are low compared to transmission, which practically cancels the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries feed small consumers, many of them single-phase, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to allow the simulation of its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in this work in order to simulate the steady-state behavior of distribution systems. To achieve this goal, the Power Summation Algorithm was used as the basis for the development of the three-phase method. This algorithm has already been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases, and the earth effect is taken into account through the Carson reduction. It is important to point out that, although the loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used, which allows the simulation of various types of configurations according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered: the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived based on the described load flow, with the objective of supporting further optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers.
After describing the calculation of the sensitivity parameters, the Gradient Method is presented, which uses these parameters to optimize an objective function defined for each type of study. The first study refers to the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second refers to the correction of the voltage profile through the installation of capacitor banks or voltage regulators. In the case of loss reduction, the objective function is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to some feeders are presented, in order to give insight into their performance and accuracy
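
To make the algorithmic basis concrete, the sketch below shows a single-phase member of the backward/forward sweep family to which the Power Summation Algorithm belongs; this simplified variant sums branch currents rather than powers and losses, whereas the thesis formulation is three-phase, with mutual coupling and the Carson reduction. The feeder data are illustrative only.

```python
# Toy radial feeder; node 0 is the substation.
branches = [(0, 1, 0.02 + 0.04j),        # (from node, to node, series impedance, pu)
            (1, 2, 0.03 + 0.05j),
            (1, 3, 0.02 + 0.03j)]
loads = {1: 0.8 + 0.3j, 2: 0.6 + 0.2j, 3: 0.5 + 0.2j}          # node powers (pu)
v = {0: 1.0 + 0j, 1: 1.0 + 0j, 2: 1.0 + 0j, 3: 1.0 + 0j}       # flat start

for _ in range(20):
    # Backward sweep: branch current = load current at the receiving node plus
    # the currents of all branches leaving that node (already computed, since
    # the branch list is ordered from the source towards the feeder ends).
    i_branch = {}
    for f, t, z in reversed(branches):
        i_load = (loads.get(t, 0) / v[t]).conjugate()
        i_children = sum(i_branch[(ff, tt)] for ff, tt, _ in branches if ff == t)
        i_branch[(f, t)] = i_load + i_children
    # Forward sweep: update voltages from the substation towards the ends.
    for f, t, z in branches:
        v[t] = v[f] - z * i_branch[(f, t)]

print({node: round(abs(volt), 4) for node, volt in v.items()})  # voltage magnitudes
```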

Relevance:

100.00%

Publisher:

Abstract:

The advent of the Internet stimulated the appearance of several services. Examples are the communication services present in users' daily lives. Services such as chat and e-mail reach an increasing number of users, a fact that is turning the Net into a powerful communication medium. This work explores the use of conventional communication services on the Net infrastructure. We introduce the concept of communication social protocols applied to a shared virtual environment, and we argue that communication tools have to be adapted to the Internet's potentialities. To do that, we draw on some theories from the Communication field and discuss their applicability in a virtual environment context. We define a multi-agent architecture to support the provision of these services, as well as a software and hardware platform to support experiments using Mixed Reality. Finally, we present the results, experiments and products obtained

Relevance:

100.00%

Publisher:

Abstract:

This graduate thesis proposes a model to asynchronously replicate heterogeneous databases. The model uniquely combines, in a systematic way and in a single project, different concepts, techniques and paradigms related to the areas of database replication and management of heterogeneous databases. One of the main advantages of replication is to allow applications to continue processing information during the time intervals when they are off the network and to trigger database synchronization as soon as the network connection is reestablished. Accordingly, the model introduces a communication and update protocol that takes into consideration the asynchronous characteristics of the environment in which it is used. As part of the work, a tool based on the model's premises was developed in the Java language in order to exercise, test, simulate and validate the proposed model
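
A minimal sketch of the asynchronous update idea described above, assuming a local change log that queues updates while the node is offline and replays them when connectivity returns. The thesis tool was written in Java; the Python code, table layout and send() callback below are illustrative assumptions.

```python
import json, sqlite3, time, uuid

class ChangeLog:
    """Queues local updates and replays them to a remote replica on reconnection."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS log "
                        "(id TEXT PRIMARY KEY, ts REAL, op TEXT, payload TEXT, sent INTEGER)")

    def record(self, op, payload):
        """Queue a local update (works even when the network is down)."""
        self.db.execute("INSERT INTO log VALUES (?, ?, ?, ?, 0)",
                        (str(uuid.uuid4()), time.time(), op, json.dumps(payload)))
        self.db.commit()

    def synchronize(self, send):
        """On reconnection, replay pending updates in timestamp order."""
        rows = self.db.execute(
            "SELECT id, op, payload FROM log WHERE sent = 0 ORDER BY ts").fetchall()
        for row_id, op, payload in rows:
            send(op, json.loads(payload))            # push to the remote replica
            self.db.execute("UPDATE log SET sent = 1 WHERE id = ?", (row_id,))
        self.db.commit()

log = ChangeLog()
log.record("UPDATE", {"table": "customer", "id": 42, "name": "Ana"})
log.synchronize(lambda op, data: print("replicating", op, data))
```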

Relevance:

100.00%

Publisher:

Abstract:

New multimedia applications that use the Internet as a communication medium are pressing for the development of new technologies, such as MPLS (Multiprotocol Label Switching) and DiffServ. These technologies introduce new and powerful features to the Internet backbone, such as the provision of QoS (Quality of Service) capabilities. However, to obtain true end-to-end QoS it is not enough to implement such technologies in the network core; it is also indispensable to extend these improvements to the access networks, which is the aim of several works presently under development. To contribute to this process, this thesis presents RSVP-SVC (Resource Reservation Protocol Switched Virtual Connection), an extension of RSVP-TE. RSVP-SVC is presented herein as a means to support true end-to-end QoS through the extension of the MPLS scope. Thus, a Switched Virtual Connection (SVC) service is specified to be used in the context of an MPLS User-to-Network Interface (MPLS UNI), able to efficiently establish and activate Label Switched Paths (LSPs), starting from the access routers, that satisfy the QoS requirements demanded by the applications. RSVP-SVC was specified in Estelle, a Formal Description Technique (FDT) standardized by ISO. The editing, compilation, verification and simulation of RSVP-SVC were carried out with the EDT (Estelle Development Toolset) software. The benefits and the most important issues to be considered when using the proposed protocol are also discussed
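
A toy sketch of the signalling idea behind an on-demand switched virtual connection (not the Estelle specification of RSVP-SVC): the access router issues a Path-like request carrying the application's QoS requirements and activates the LSP when the corresponding Resv-like answer returns. States, message names and fields are deliberately simplified assumptions.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    PATH_SENT = auto()
    ESTABLISHED = auto()

class SvcSession:
    """Toy per-session state machine for setting up an LSP on demand with QoS."""
    def __init__(self, destination, bandwidth_kbps, max_delay_ms):
        self.destination = destination
        self.qos = {"bandwidth_kbps": bandwidth_kbps, "max_delay_ms": max_delay_ms}
        self.state = State.IDLE
        self.label = None

    def request(self, send):
        """Send a Path-like message with the QoS requirements towards the egress."""
        send({"type": "Path", "dst": self.destination, "qos": self.qos})
        self.state = State.PATH_SENT

    def on_resv(self, msg):
        """A Resv-like message carrying a label activates the LSP for this session."""
        if self.state is State.PATH_SENT and msg.get("type") == "Resv":
            self.label = msg["label"]
            self.state = State.ESTABLISHED

session = SvcSession("10.0.0.7", bandwidth_kbps=512, max_delay_ms=100)
session.request(lambda msg: print("sent", msg))
session.on_resv({"type": "Resv", "label": 1021})
print(session.state, session.label)
```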

Relevance:

100.00%

Publisher:

Abstract:

Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that such problems cannot be solved in polynomial time. Initially, these solutions were based on heuristics; currently, metaheuristics are used more often for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of the so-called "Operon" heuristic for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for the evaluation of the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains to promote an "intelligent" search in the space of solutions. The approach is applied to the Traveling Salesman Problem (TSP) through a transgenetic algorithm known as ProtoG. A strategy is also proposed for the renewal of part of the chromosome population, triggered when the coefficient of variation of the individuals' fitness function, computed over the population, falls below a minimum limit. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is carried out through Logistic Regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is carried out through Survival Analysis, based on the execution time observed until an optimal solution is achieved. The third is carried out by means of a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison with the three other algorithms. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than the other three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three experiments, in which the performance of the algorithms was evaluated through the PES, the average PES obtained with ProtoG was less than 1% in almost half of the instances, reaching its largest value, 3.52%, for one instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare in the literature to find reported average PES values greater than 10% for instances of this size.
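
Two of the quantities described above are simple enough to sketch directly: the Percent Error of the Solution (PES) and the population-renewal trigger based on the coefficient of variation of the individuals' fitness. The threshold value in the sketch is an illustrative assumption.

```python
import numpy as np

def percent_error_of_solution(found_cost, best_known_cost):
    """PES: percentage by which the solution found exceeds the best known one."""
    return 100.0 * (found_cost - best_known_cost) / best_known_cost

def should_renew_population(fitness_values, cv_min=0.05):
    """Trigger partial population renewal when the coefficient of variation of
    the individuals' fitness drops below a minimum limit (diversity is lost)."""
    fitness = np.asarray(fitness_values, dtype=float)
    cv = fitness.std() / fitness.mean()
    return cv < cv_min

print(percent_error_of_solution(10_350.0, 10_000.0))         # -> 3.5 (% above best)
print(should_renew_population([105.0, 104.8, 105.2, 104.9]))  # low diversity -> True
```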

Relevance:

100.00%

Publisher:

Abstract:

Large efforts have been made by the scientific community on tasks involving the locomotion of mobile robots. To execute this kind of task, the robot must be given the ability to navigate through the environment safely, that is, without colliding with objects. In order to do this, it is necessary to implement strategies that make it possible to detect obstacles. In this work we address this problem by proposing a system that is able to collect sensory information and to estimate the possibility of obstacles occurring in the mobile robot's path. Stereo cameras, positioned parallel to each other in a structure coupled to the robot, are employed as the main sensory device, making it possible to generate a disparity map. Code optimizations and a strategy for data reduction and abstraction are applied to the images, resulting in a substantial gain in execution time. This enables the high-level decision processes to perform obstacle avoidance in real time. The system can be employed both in situations where the robot is remotely operated and in situations where it depends only on itself to generate trajectories (the autonomous case)
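
A minimal sketch of the sensing and data-reduction steps described above, using OpenCV block matching to build a disparity map from a parallel stereo pair and collapsing it into a coarse grid of "near obstacle" flags. File names, grid size and thresholds are illustrative assumptions, not the thesis parameters.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # illustrative file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # in pixels

# Data reduction: split the image into coarse cells and keep only a per-cell
# "near obstacle" flag, so the high-level planner works on a small grid.
rows, cols = 4, 8
h, w = disparity.shape
near = np.zeros((rows, cols), dtype=bool)
for r in range(rows):
    for c in range(cols):
        cell = disparity[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
        near[r, c] = np.median(cell[cell > 0]) > 30.0 if np.any(cell > 0) else False
print(near)
```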

Relevance:

100.00%

Publisher:

Abstract:

Some approaches take advantage of unused computational resources in Internet nodes, namely the users' machines. In recent years, peer-to-peer (P2P) networks have been gaining momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present some problems, such as node overhead due to message routing, a great number of node reconfigurations when the network topology changes, routing of traffic inside a specific network even when the traffic is not directed to a machine of that network, and the lack of a relationship between the proximity of nodes in the P2P network and their proximity in the IP network. Although some architectures use information about the distance between nodes in the IP network, they rely on methods that require dynamic information. In this work we propose a P2P architecture that addresses the problems mentioned above. It is composed of three parts. The first part is a basic P2P architecture, called SGrid, which maintains a relationship between the position of nodes in the P2P network and their position in the IP network; it assigns adjacent key regions to nodes of the same organization. The second part is a protocol called NATal (Routing and NAT application layer) that extends the basic architecture in order to remove from the nodes the responsibility for routing messages. The third part is a special kind of node, called LSP (Lightware Super-Peer), which is responsible for maintaining the P2P routing table. In addition, this work also presents a simulator that validates the architecture and a module of the NATal protocol to be used in Linux routers
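
A toy sketch of the key-assignment idea attributed to SGrid above: the identifier space is divided into contiguous regions and nodes of the same organization receive adjacent regions, so that an organization's keys stay together. The key size, hashing scheme and region layout are illustrative assumptions, not the SGrid implementation.

```python
import hashlib

KEY_BITS = 32
SPACE = 1 << KEY_BITS

def assign_regions(organizations):
    """organizations: {org: [node, ...]} -> {node: (region_start, region_end)}."""
    nodes = [n for org in sorted(organizations) for n in organizations[org]]
    region = SPACE // len(nodes)
    mapping = {}
    for i, node in enumerate(nodes):              # nodes of one org come out adjacent
        hi = SPACE - 1 if i == len(nodes) - 1 else (i + 1) * region - 1
        mapping[node] = (i * region, hi)
    return mapping

def responsible_node(key, mapping):
    """Find the node whose key region contains the hashed key."""
    h = int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")
    return next(n for n, (lo, hi) in mapping.items() if lo <= h <= hi)

regions = assign_regions({"orgA": ["a1", "a2"], "orgB": ["b1"]})
print(regions)
print(responsible_node("some-object", regions))
```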