965 results for Electrical Engineering
Abstract:
This work presents a procedure for evaluating the uncertainty associated with the calibration of flow meters and with BS&W measurement. It concerns a new measurement method proposed in the conceptual design of the LAMP laboratory at Universidade Federal do Rio Grande do Norte, which determines the conventional true value of the BS&W from the total height of the liquid column in the auditing tank, the hydrostatic pressure exerted by the liquid column, the local gravity, and the specific masses of the water and of the oil, and determines the flow rate from the total height of the liquid column and the transfer time. The calibration uses an automated system for monitoring and acquiring the quantities needed to determine the flow rate and the BS&W, improving the reliability of the measurements.
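As a rough illustration of the measurement principle described above (not the laboratory's actual procedure), the mean density of the liquid column can be inferred from the hydrostatic pressure, and the water fraction then follows from a linear two-component mixture model; the function name and the mixture assumption are ours:

```python
def bsw_from_column(pressure_pa, height_m, gravity, rho_water, rho_oil):
    """Estimate the BS&W (water volume fraction) of an oil/water column.

    Assumes the hydrostatic relation p = rho_mean * g * h and a linear
    mixture rho_mean = x * rho_water + (1 - x) * rho_oil.
    """
    rho_mean = pressure_pa / (gravity * height_m)
    return (rho_mean - rho_oil) / (rho_water - rho_oil)
```

For example, a 2 m column whose pressure implies a mean density of 925 kg/m3, with water at 1000 kg/m3 and oil at 850 kg/m3, yields a BS&W of 50%.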
Abstract:
Wavelet coding has emerged as an alternative coding technique to mitigate the fading effects of wireless channels. This work evaluates the performance of wavelet coding, in terms of bit error probability, over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST 207 standard, the main international reference for GSM, UMTS, and EDGE applications. The results show the efficiency of wavelet coding against the intersymbol interference that characterizes these communication scenarios. The robustness of the presented technique enables its use in different environments, bringing it one step closer to being applied in practical wireless communication systems.
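For readers unfamiliar with the channel model, a Rayleigh-distributed fading envelope arises as the magnitude of a zero-mean complex Gaussian channel gain. A minimal sketch (our own illustration, not the COST 207 simulator used in the work):

```python
import math
import random

def rayleigh_envelope(sigma, rng):
    """One Rayleigh fading envelope sample: the magnitude of a zero-mean
    complex Gaussian gain with per-component standard deviation sigma."""
    return math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))

# The theoretical mean envelope for this model is sigma * sqrt(pi / 2).
```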
Abstract:
Nowadays, optical fiber is one of the most widely used communication media, mainly because the data transmission rates of these systems exceed those of all other means of digital communication. Despite this great advantage, there are problems that prevent full utilization of the optical channel: as the transmission speed and the distances involved increase, the data is subjected to nonlinear intersymbol interference caused by dispersion phenomena in the fiber. Adaptive equalizers can be used to solve this problem: they compensate for the non-ideal response of the channel in order to restore the transmitted signal. This work proposes an equalizer based on artificial neural networks and evaluates its performance in optical communication systems. The proposal is validated through a simulated optical channel and a comparison with other adaptive equalization techniques.
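As a baseline for the kind of comparison mentioned above, a classical adaptive equalizer can be sketched with the LMS algorithm (our illustrative example with a toy channel; the thesis's own equalizer is a neural network):

```python
import random

def lms_equalizer(received, desired, num_taps=3, mu=0.05):
    """Train a linear FIR equalizer with the LMS rule:
    w <- w + mu * error * input_vector."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(received)):
        u = received[n - num_taps + 1 : n + 1][::-1]  # most recent sample first
        y = sum(wi * ui for wi, ui in zip(w, u))      # equalizer output
        e = desired[n] - y                            # error signal
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
    return w

# Toy channel: pure attenuation by 0.5; the equalizer should learn a gain of ~2.
rng = random.Random(0)
symbols = [rng.choice((-1.0, 1.0)) for _ in range(2000)]
received = [0.5 * s for s in symbols]
weights = lms_equalizer(received, symbols)
```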
Abstract:
The use of Field Programmable Gate Arrays (FPGAs) for the development of digital control strategies for power electronics applications has aroused growing interest among researchers. This interest is due to the great advantages offered by FPGAs, which include lower design effort, high performance, and highly flexible prototyping. This work proposes the development and FPGA-based implementation of a unified one-cycle controller for a boost PFC rectifier. This controller can be applied to a total of twelve converters, six inverters and six rectifiers, defined by four single-phase VSI topologies and three voltage modulation types. The topologies considered in this work are full-bridge, interleaved full-bridge, half-bridge, and interleaved half-bridge, while the modulations are classified as bipolar voltage modulation (BVM), unipolar voltage modulation (UVM), and clamped voltage modulation (CVM). The proposed design is developed and prototyped using Matlab/Simulink® together with the DSP Builder library provided by Altera®. The proposed controller was validated with simulation and experimental results.
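Independently of the specific FPGA design above, the essence of one-cycle control is that in each switching cycle the controller integrates the sensed variable and switches off when the running integral reaches the reference, forcing the cycle average to track the reference. A simplified numeric sketch (function names and the constant-signal example are our assumptions):

```python
def one_cycle_duty(v_sense, v_ref, period, dt=1e-4):
    """Return the duty ratio at which the running integral of the sensed
    signal v_sense(t) reaches v_ref * period (one-cycle control law).
    Returns ~1.0 if the integral never reaches the reference."""
    integral = 0.0
    t = 0.0
    while t < period:
        integral += v_sense(t) * dt
        t += dt
        if integral >= v_ref * period:
            break
    return t / period
```

With a constant sensed value of 2.0 and a reference of 1.0, the switch turns off halfway through the cycle, i.e. the duty ratio is 0.5.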
Abstract:
In three-dimensional space, any rigid body can translate and/or rotate about each of its axes. Precisely identifying the displacement performed by a body is fundamental for some types of engineering systems. Traditional inertial navigation systems use accelerometers to measure linear acceleration and gyroscopes to measure the angular velocity recorded during the displacement. The gyroscope, however, is a more expensive device with higher power consumption than an accelerometer. This disadvantage gave rise to research on inertial measurement systems and units that do not use gyroscopes. The idea of using only accelerometers to compute linear and angular motion emerged in the early 1960s and has since developed through models that vary in the number of sensors, in the way they are arranged, and in the mathematical model used to derive the motion of the body. This work proposes a configuration scheme for building an inertial measurement unit that uses three triaxial accelerometers. To identify the displacement of a rigid body with this scheme, a mathematical model was used that relies only on the nine acceleration signals extracted from the three sensors. The proposal suggests that the sensors be mounted and distributed in an L-shaped layout. This arrangement allows the use of a single plane of the coordinate system, thereby simplifying the installation and configuration of the devices and making it possible to place the sensors on a single integrated circuit board. The results obtained from the initial simulations demonstrate the feasibility of the proposed configuration scheme.
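To illustrate the underlying rigid-body principle in a simplified planar case (our own example, not the nine-signal model of the thesis): two accelerometers separated by a lever arm sense differences that separate the angular acceleration (tangential term) from the squared angular rate (centripetal term), via a2 - a1 = alpha x r + omega x (omega x r).

```python
def planar_rates_from_two_accels(a1, a2, lever_arm):
    """Given two accelerometer readings (ax, ay) at points separated by
    lever_arm along the body x-axis, recover the angular acceleration
    about z and the squared angular rate of a planar rigid body."""
    dax = a2[0] - a1[0]
    day = a2[1] - a1[1]
    alpha_z = day / lever_arm    # tangential component: alpha_z * r
    omega_sq = -dax / lever_arm  # centripetal component: -omega^2 * r
    return alpha_z, omega_sq
```

For example, with omega = 3 rad/s, alpha_z = 2 rad/s2 and a 0.5 m lever arm, the reading difference is (-4.5, 1.0) m/s2, from which both rates are recovered.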
Abstract:
This work presents an auxiliary method for bone density measurement based on the attenuation of electromagnetic waves. To this end, an arrangement of two rectangular microstrip antennas was used, operating at a frequency of 2.49 GHz and fed by a microstrip line on a fiberglass substrate with permittivity 4.4 and height 0.9 cm. Simulations were performed with samples of silica, bone meal, and silica and gypsum blocks to demonstrate the variation in attenuation level for different combinations. Samples of bovine bone were used because they reproduce well the characteristics of human bone anomalies. They were subjected to weighing, measurement, and microwave radiation; the samples then had their masses altered and the process was repeated. The obtained data were fed into a neural network, whose best training results achieved correct classification of 100% of the samples. It is concluded that with a single non-ionizing wave around 2.49 GHz it is possible to evaluate the attenuation level in bone tissue, and that a neural network fed with the characteristics obtained in the experiment can classify a sample as having low or high bone density.
Abstract:
This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on a multicore architecture to solve large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, highlighting important points of the parallel implementation. Performance analyses were conducted by comparison against the sequential time of the Simplex tableau implementation and of IBM's CPLEX® Simplex. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX®, the proposed parallel algorithm achieved an efficiency up to 16 times better.
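For context on what gets parallelized: in each Simplex iteration, after the pivot row is normalized, every remaining tableau row is updated independently, which is the natural data-parallel step. A sequential sketch of that pivot operation (illustrative only, not the thesis's multicore code):

```python
def pivot(tableau, pivot_row, pivot_col):
    """Gauss-Jordan pivot on a Simplex tableau (list of row lists).
    The per-row elimination loop is the step a multicore version
    distributes across threads, since the rows are independent."""
    p = tableau[pivot_row][pivot_col]
    tableau[pivot_row] = [v / p for v in tableau[pivot_row]]
    for i, row in enumerate(tableau):
        if i != pivot_row and row[pivot_col] != 0.0:
            f = row[pivot_col]
            tableau[i] = [a - f * b for a, b in zip(row, tableau[pivot_row])]
    return tableau
```

After the pivot, the pivot column becomes a unit vector, which is easy to check on a small tableau.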
Abstract:
This work addresses a new and little-explored approach to the simultaneous localization and mapping (SLAM) problem. The purpose is to have a mobile robot operate in an indoor environment, mapping the environment and localizing itself on the map. The robot used in the tests has an upward-facing camera and wheel encoders. The landmarks in the built map are light spots in the camera images caused by luminaires on the ceiling. This work develops a solution to the SLAM problem based on the Extended Kalman Filter, using an observation model developed for this setting. The tests performed and the software developed to carry out the SLAM experiments are presented in detail.
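The EKF-SLAM machinery can be illustrated with a one-dimensional toy problem (one robot coordinate, one landmark, and a relative-position measurement); this is our simplification for exposition, not the thesis's camera-based observation model:

```python
def ekf_slam_step(mu, P, u, z, q=0.01, r=0.1):
    """One EKF-SLAM iteration for state [robot_x, landmark_x], with
    motion model x += u and measurement z = landmark_x - robot_x
    (Jacobian H = [-1, 1])."""
    # Predict: only the robot moves, so only its variance grows.
    mu = [mu[0] + u, mu[1]]
    P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1]]]
    # Update: the innovation is scalar, so the gain has a closed form.
    y = z - (mu[1] - mu[0])                               # innovation
    S = P[0][0] - P[0][1] - P[1][0] + P[1][1] + r         # H P H^T + r
    K = [(-P[0][0] + P[0][1]) / S, (-P[1][0] + P[1][1]) / S]  # P H^T / S
    mu = [mu[0] + K[0] * y, mu[1] + K[1] * y]
    A = [[1 + K[0], -K[0]], [K[1], 1 - K[1]]]             # (I - K H)
    P = [[sum(A[i][k] * P[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return mu, P
```

Starting with a large landmark variance, the landmark estimate converges to its true position after a few noiseless measurements.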
Abstract:
The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed radio spectrum allocation. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied to both spectrum sensing and modulation classification, even in low signal-to-noise ratio (SNR) environments. However, despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for extracting cyclostationary features for use in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the performance of this parallelization is evaluated by speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR level, and observation time for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is evaluated by the correct classification rates of AM, BPSK, QPSK, MSK, and FSK modulations, considering several scenarios of observation length and SNR level. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
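The central quantity in cyclostationary analysis is the cyclic autocorrelation R_x^alpha(tau); a direct, unoptimized estimator makes the computational cost mentioned above concrete, since each (alpha, tau) pair costs O(N). The test signal and frequencies below are our example:

```python
import cmath

def cyclic_autocorrelation(x, alpha, tau=0):
    """Estimate R_x^alpha(tau) =
    (1/N) * sum_n x[n + tau] * conj(x[n]) * exp(-j 2 pi alpha n)."""
    n_max = len(x) - tau
    acc = sum(x[n + tau] * complex(x[n]).conjugate()
              * cmath.exp(-2j * cmath.pi * alpha * n)
              for n in range(n_max))
    return acc / n_max
```

A real sinusoid at frequency f exhibits a cyclic feature at alpha = 2f (magnitude 1/4 for unit amplitude), while the estimate vanishes at unrelated cyclic frequencies.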
Abstract:
The control of industrial processes has become increasingly complex due to the variety of factory devices, quality requirements, and market competition. Such complexity requires a large amount of data to be handled by the three levels of process control: field devices, control systems, and management software. Using data effectively at each of these levels is extremely important to industry. Many of today's industrial computer systems consist of distributed software systems written in a wide variety of programming languages and developed for specific platforms, so more and more companies make significant investments to maintain or even rewrite their systems for different platforms. Furthermore, it is rare for a software system to work in complete isolation. In industrial automation it is common for software to have to interact with other systems on different machines, often written in different languages. Thus, interoperability is not just a long-term challenge, but a requirement of current industrial software production. This work proposes a middleware solution for communication over web services and presents a use case applying the developed solution to an integrated system for industrial data capture, making such data available across the network in a simplified, platform-independent way.
Abstract:
Research and development involving PID controller tuning remains an active area in both academia and industry. This is due to the wide use of PID controllers in industry (96% of all industrial controllers are still PID). Well-tuned controllers, together with tools that monitor their performance over time and allow self-tuning, have become almost mandatory to keep processes running with high productivity and low cost; in a globalized world, this is essential for a company's survival. Although several new tools and techniques exist for PID tuning, this work explores PID tuning using the relay method, owing to its good acceptance in industrial environments. In addition, we discuss some techniques for evaluating control loops, such as IAE, ISE, the Goodhart index, the variance of the control signal, and the Harris index, which are needed to propose new tunings for control loops with low performance. This work proposes a PID auto-tuning software tool based on the relay method, highlighting in particular the relay with hysteresis. This method has produced tunings with satisfactory performance when applied to didactic, simulated, and real plants.
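The relay method drives the loop into a limit cycle; the describing-function approximation then gives the ultimate gain, from which classical Ziegler-Nichols PID settings follow. A sketch of that final calculation (textbook formulas; the function and variable names are ours):

```python
import math

def pid_from_relay_test(relay_amplitude, osc_amplitude, osc_period):
    """Ziegler-Nichols PID settings from a relay feedback experiment.

    Ku = 4 d / (pi a) is the ultimate gain from the describing-function
    approximation, where d is the relay amplitude and a the measured
    oscillation amplitude; the oscillation period is taken as Pu.
    Returns (Kp, Ti, Td)."""
    ku = 4.0 * relay_amplitude / (math.pi * osc_amplitude)
    kp = 0.6 * ku
    ti = osc_period / 2.0
    td = osc_period / 8.0
    return kp, ti, td
```

For instance, a unit relay producing an oscillation of amplitude 4/pi gives Ku = 1, hence Kp = 0.6; a 10 s oscillation period gives Ti = 5 s and Td = 1.25 s.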
Abstract:
Frequency selective surfaces (FSS) are often used in various telecommunications applications. Some of these applications require structures whose response has multiple resonance bands; others require an FSS response covering a large frequency range. Numerous techniques for designing FSS with these features are reported in the scientific literature. The purpose of this work is to examine some common ones, such as cascaded FSS, combined elements, convoluted elements, and fractal elements, and to design multiband and/or broadband FSS using geometries that are simple to fabricate and occupy the smallest possible space, aiming at practical applications. Given these requirements, three FSS designs were produced: one applied to IEEE 802.11 a/b/g/n technology and two for UWB applications. The designs were developed with the commercial software Ansoft Designer™, and the experimental results were satisfactory.
Abstract:
We propose in this work a software architecture for robotic boats intended to operate fully autonomously in diverse aquatic environments, performing telemetry to a base station in order to accomplish their missions. This proposal is intended for the N-Boat project of the NatalNet/DCA laboratory, which aims to enable a sailboat to navigate autonomously. The constituent components of this architecture are the memory, strategy, communication, sensing, actuation, energy, security, and surveillance modules, which make up the boat and base station systems. For validation, a simulator was developed in the C language using the OpenGL graphics API. The main results were obtained in the implementation of the memory, actuation, and strategy modules: data sharing, control of sails and rudder, and short-route planning based on a navigation algorithm, respectively. The experimental results shown in this study indicate the feasibility of actually using the developed software architecture and applying it in the area of autonomous mobile robotics.
Abstract:
There is a growing need for new tools to help end users in tasks related to the design, monitoring, maintenance, and commissioning of critical infrastructures. The complexity of the industrial environment, for example, requires that these tools have flexible features in order to provide valuable data to designers during the design phases. Furthermore, industrial processes are known to have stringent dependability requirements, since failures can cause economic losses, environmental damage, and danger to people. Tools that enable the evaluation of faults in critical infrastructures could mitigate these problems, but they are lacking. Accordingly, this work presents the development of a framework for dependability analysis of critical infrastructures. The proposal allows the modeling of a critical infrastructure by mapping its components to a fault tree. The generated mathematical model is then used for dependability analysis of the infrastructure, based on the failures of the equipment and its interconnections. Finally, typical industrial scenarios are used to validate the proposal.
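For independent basic events, evaluating a fault tree reduces to combining failure probabilities through its gates; a minimal sketch of the two basic gates (our own illustration, not the framework's implementation):

```python
def and_gate(probs):
    """The top event occurs only if ALL inputs fail (independent events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """The top event occurs if ANY input fails (independent events)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p
```

In infrastructure terms, a series of components maps to an OR gate (any failure brings the system down), while redundant components map to an AND gate.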
Abstract:
This work presents the specification and implementation of a transformation language for models defined according to OMG's (Object Management Group) MOF (Meta Object Facility) specification. The specification uses an approach based on ECA (Event-Condition-Action) rules and was produced from a set of previously defined usage scenarios. The parser, responsible for ensuring that the syntactic structure of the language is correct, was built with the JavaCC (Java Compiler Compiler) tool, and the syntax of the language was described in EBNF (Extended Backus-Naur Form). The implementation is divided into three parts: the creation of the interpreter itself in Java, the creation of an executor for the actions specified in the language, and its integration with the adopted repository type (generated by the DSTC dMOF tool). A final prototype was developed and tested on the previously defined scenarios.