24 results for PROCESSOR
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit the cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios or non-stationary noise. However, such methods have high computational complexity, which directly raises the power consumption of devices that often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption, based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We derive a strict relation between the energy savings and common parallel system metrics. Simulation results show that our strategy promises very significant savings in actual devices.
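The energy argument above can be sketched with a simple model. Assuming dynamic power grows roughly with the cube of clock frequency (supply voltage scaled linearly with frequency), splitting the work across p cores running at f/p keeps the execution time while cutting energy by a factor of p². The cubic exponent and function names are illustrative assumptions, not the dissertation's exact derivation.

```python
# Illustrative energy model (an assumption for this sketch, not the
# dissertation's exact derivation): dynamic power scales roughly as
# P ~ k * f**3 when supply voltage is scaled linearly with frequency f.

def power(f, k=1.0):
    """Dynamic power of one core running at clock frequency f."""
    return k * f ** 3

def parallel_energy_ratio(p):
    """Energy of p cores at f/p relative to one core at f.

    Each slow core does 1/p of the work at frequency f/p, so wall-clock
    time matches the single fast core (assuming ideal speedup), and the
    energy ratio equals the power ratio: p * (f/p)**3 / f**3 = 1/p**2.
    """
    return p * power(1.0 / p) / power(1.0)

print(parallel_energy_ratio(4))   # -> 0.0625
```

Real devices fall short of the ideal 1/p² because of static leakage and imperfect speedup, which is where the parallel system metrics mentioned above enter.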
Abstract:
The transport of fluids through pipes is widely used in the oil industry, with pipelines forming an important link in the logistics of fluid transport. However, pipeline walls deteriorate due to several factors, which may cause fluid losses to the environment, justifying the investment in leak detection techniques and methods to minimize fluid loss and environmental damage. This work presents the development of a supervisory module whose purpose is to inform the operator of a leak in the monitored pipeline in the shortest possible time, so that the operator can start the procedures that bring the leak to an end. This module is a component of a system designed to detect leaks in oil pipelines using sonic technology, wavelets and neural networks. The plant used in the development and testing of the module was the tank system of LAMP, with its LAN serving as the monitoring network. The proposal consists, basically, of two stages. First, the performance of the communication infrastructure of the supervisory module is assessed. Then, leaks are simulated: the DSP sends information to the supervisory module, which computes the leak location and indicates the sensor closest to the leak; using the LAMP tank system, the pressure in the monitored pipeline is captured by piezoresistive sensors, processed by the DSP and sent to the supervisory module, where it is presented to the user in real time.
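A standard way to compute a leak position from two pressure sensors is the difference in arrival times of the pressure wave at each end of the monitored section. The sketch below shows only that textbook time-of-arrival formula; the system described here additionally uses wavelets and neural networks, which are not reproduced.

```python
def leak_position(pipe_length, wave_speed, dt):
    """Distance of the leak from sensor A.

    pipe_length: distance between sensors A and B (m)
    wave_speed:  pressure-wave propagation speed in the fluid (m/s)
    dt:          arrival-time difference t_A - t_B (s)

    From t_A = x / v and t_B = (L - x) / v:  x = (L + v * dt) / 2.
    """
    return (pipe_length + wave_speed * dt) / 2.0

# A leak 300 m from sensor A on a 1000 m pipe, wave speed 1200 m/s:
dt = 300 / 1200 - 700 / 1200            # t_A - t_B, here negative
print(leak_position(1000, 1200, dt))    # prints approximately 300
```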
Abstract:
This study developed software routines for a system built around a digital signal processing board and a supervisory system, whose main function is to correct the measurements of a turbine gas meter. The correction is based on an intelligent algorithm consisting of an artificial neural network. The routines were implemented both in the supervisory environment and in the DSP environment and comprise three main parts: processing, communication and supervision.
Abstract:
In academia, it is common to create didactic processors aimed at practical courses in the area of computer hardware; such processors can also serve as subjects in software courses on platforms, operating systems and compilers. Often, these processors are described without a standard ISA, which requires the creation of compilers and other basic software to provide the hardware/software interface and hinders their integration with other processors and devices. Using reconfigurable devices described in an HDL allows the creation or modification of any microarchitecture component, making it possible to alter the functional units of the processor's datapath, as well as the state machine that implements the control unit, as new needs arise. In particular, RISP processors enable the modification of machine instructions, allowing instructions to be added or changed, and may even adapt to a new architecture. Taking educational soft-core processors described in VHDL as its object of study, this work proposes a methodology and applies it to two processors of different complexity levels, showing that it is possible to tailor processors to a standard ISA without increasing hardware complexity, i.e., without a significant increase in chip area, while performance in application execution remains unchanged or is enhanced. The implementations also show that, besides it being possible to replace the architecture of a processor without changing its organization, a RISP processor can switch between different instruction sets, which can be extended to toggling between different ISAs, allowing a single processor to become an adaptive hybrid architecture usable in embedded systems and heterogeneous multiprocessor environments.
Abstract:
RFID (Radio Frequency Identification) identifies objects using radio frequency and is a contactless automatic identification technique. This technology has shown great practical value and potential in manufacturing, retail, logistics and hospital automation. Unfortunately, a key problem that limits the adoption of RFID systems is information security. Recently, researchers have proposed solutions to security threats in RFID technology, among them several key management protocols. This master's dissertation presents a performance evaluation of the Neural Cryptography and Diffie-Hellman protocols in RFID systems, measuring the processing time inherent in each protocol. The tests were performed on an FPGA (Field-Programmable Gate Array) platform with a Nios II embedded processor. The research methodology is based on aggregating knowledge for the development of new RFID systems through a comparative analysis of the two protocols. The main contributions of this work are a performance evaluation of the two protocols (Diffie-Hellman and Neural Cryptography) on an embedded platform and a survey of RFID security threats. According to the results, the Diffie-Hellman key agreement protocol is more suitable for RFID systems.
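For reference, the Diffie-Hellman side of such a comparison reduces to modular exponentiation, which is what dominates the processing time measured on the embedded platform. The sketch below uses a toy 64-bit prime and a generator chosen only for illustration; a real deployment would use standardized, much larger parameters (e.g. the RFC 3526 groups).

```python
import secrets

# Toy Diffie-Hellman key agreement. The prime and generator below are
# illustrative assumptions for this sketch, not production parameters.
P = 2 ** 64 - 59          # largest prime below 2**64
G = 5                     # assumed generator for this sketch

def keypair():
    priv = secrets.randbelow(P - 2) + 1    # random private exponent
    return priv, pow(G, priv, P)           # (private, public) pair

a_priv, a_pub = keypair()      # e.g. the RFID reader
b_priv, b_pub = keypair()      # e.g. the tag's crypto block
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
print(shared_a == shared_b)    # both sides derive the same secret
```

On a soft-core processor such as the Nios II, the cost of `pow(base, exp, P)` grows with the bit lengths of the exponent and modulus, which is why parameter size dominates the measured processing time.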
Abstract:
Embedded systems are widespread nowadays. An example is the Digital Signal Processor (DSP), a device with high processing power. This work's contribution consists of presenting a DSP implementation of the system logic for detecting leaks in real time. Among the various leak detection methods available today, this work uses a technique based on pipe pressure analysis, employing the Wavelet Transform and Neural Networks. In this context, the DSP, in addition to performing the digital processing of the pressure signal, also communicates with a Global Positioning System (GPS), which helps locate the leak, and with a SCADA system, sharing information. To ensure robustness and reliability in the communication between the DSP and the SCADA system, the Modbus protocol is used. As this is a real-time application, special attention is given to the response time of each of the tasks performed by the DSP. Tests and leak simulations were performed using the facilities of the Laboratory of Evaluation of Measurement in Oil (LAMP) at the Federal University of Rio Grande do Norte (UFRN).
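The wavelet part of such detection logic exploits the fact that a leak produces a sharp pressure transient, which shows up as a large detail coefficient. The sketch below applies one level of the Haar transform to synthetic data; it illustrates the principle only and is not the dissertation's DSP code.

```python
# A sudden pressure drop produces a large Haar detail coefficient,
# which is the cue a wavelet-based leak detector looks for.

def haar_details(signal):
    """One level of the Haar wavelet transform: detail coefficients."""
    return [(signal[i] - signal[i + 1]) / 2.0
            for i in range(0, len(signal) - 1, 2)]

def detect_transient(signal, threshold):
    """Index (among detail coefficients) of the first large transient."""
    for i, d in enumerate(haar_details(signal)):
        if abs(d) > threshold:
            return i
    return None

# Steady pressure, then a sudden drop between samples 6 and 7:
pressure = [10.0] * 7 + [7.0] * 9
print(detect_transient(pressure, threshold=0.5))   # -> 3
```

In the real system the detail coefficients feed a neural network classifier rather than a fixed threshold, which makes the decision robust to normal operational transients.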
Abstract:
This work presents an experimental study of a high-frequency power supply that feeds the plasma torch to be implemented in the PLASPETRO project, consisting of two static converters built with Insulated Gate Bipolar Transistors (IGBTs). The drivers used to control these switches are triggered by a Digital Signal Processor (DSP) through optical fibers, to reduce problems with electromagnetic interference (EMI). The first stage consists of a pre-regulator in the form of a three-phase AC-DC boost converter with power factor correction, which is the main theme of this work, while the second stage is the high-frequency source itself: a series-resonant inverter composed of four (4) inverter cells, each operating at a frequency around 115 kHz in soft-switching mode, alternating to supply the load (the plasma torch) with an alternating current at a frequency of 450 kHz. The first stage has the function of providing the series-resonant inverter with a DC voltage, controlled from the power supplied by the utility's electrical system, and of correcting the power factor of the system as a whole. The DC bus voltage at the output of the first stage is used to control the power transferred by the inverter to the load, and may vary from 550 VDC to a maximum of 800 VDC. A proportional-integral (PI) controller is used to regulate the DC bus voltage, and two other PI current controllers are used to achieve unity power factor. Computational simulations were performed to assist in sizing and performance forecasting. All the control and communication needed by the supervisory stage were implemented on a DSP.
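A discrete PI controller of the kind used to regulate the DC bus voltage can be sketched in a few lines. The gains, the crude first-order plant model and the time step below are illustrative assumptions, not the values designed in this work.

```python
# Minimal discrete proportional-integral (PI) controller driving a
# crude first-order model of the DC bus from 550 VDC toward 800 VDC.
# Gains and plant are illustrative, not the converter designed here.

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

pi = PI(kp=0.5, ki=2.0, dt=1e-3)
v = 550.0                                 # initial bus voltage (VDC)
for _ in range(40_000):                   # 40 s of simulated time
    u = pi.step(800.0, v)                 # controller output
    v += (u - 0.1 * (v - 550.0)) * 1e-3   # plant: integrator with losses
print(round(v))                           # settles at the setpoint
```

The integral term is what lets the controller hold the bus at the setpoint despite the steady load drawn by the resonant inverter; the proportional term alone would leave a residual error.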
Abstract:
This work deals with a hardware implementation of an OFDMA baseband processor for the LTE downlink. LTE, or Long Term Evolution, is the latest stage in the development of the technology known as 3G (third-generation mobile systems), offering higher data rates and more efficiency and flexibility in transmission through advanced antenna and multicarrier techniques. In its physical layer, this technology applies OFDMA (Orthogonal Frequency Division Multiple Access) for signal generation and the mapping of physical resources in the downlink, and has as its theoretical basis the OFDM (Orthogonal Frequency Division Multiplexing) multicarrier technique. With the recent completion of the LTE specifications, different hardware solutions have been developed, mainly at the symbol-processing level, where the implementation of an OFDMA baseband processor is commonly considered, since it is also a basic architecture for other important applications. For the implementation of the processor, the reconfigurable hardware offered by devices such as FPGAs is considered, which not only helps meet LTE's high requirements of flexibility and adaptability, but also offers the possibility of a quick and efficient implementation. The processor implemented in reconfigurable hardware meets the specifications of the LTE physical layer and has the flexibility necessary to meet other standards and applications that use an OFDMA processor as the basic architecture of their systems. The results obtained through simulation and functional verification confirm the functionality and flexibility of the implemented processor.
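The core operation such a baseband processor performs is mapping complex symbols onto orthogonal subcarriers with an inverse DFT and prepending a cyclic prefix. The sketch below uses tiny, illustrative sizes; LTE uses IFFT sizes such as 512-2048 and standardized cyclic prefix lengths, and the hardware uses an FFT core rather than the direct O(N²) transform shown here.

```python
import cmath

# Toy OFDM symbol generation: inverse DFT plus cyclic prefix.
# Sizes are illustrative only, far smaller than LTE's.

def idft(X):
    """Inverse discrete Fourier transform (direct O(N^2) form)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

def ofdm_symbol(subcarriers, cp_len):
    """Time-domain OFDM symbol: IDFT output with a cyclic prefix."""
    time = idft(subcarriers)
    return time[-cp_len:] + time   # CP = copy of the last cp_len samples

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j,
        1 + 1j, -1 - 1j, -1 + 1j, 1 - 1j]   # arbitrary QPSK symbols
sym = ofdm_symbol(qpsk, cp_len=2)
print(len(sym))                    # -> 10 (8 samples + CP of 2)
```

The cyclic prefix turns the channel's linear convolution into a circular one, which is what lets the receiver equalize each subcarrier independently.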
Abstract:
Relevant research has been growing on electric machines without bearings, generally named bearingless motors (in Portuguese, "motor mancal"). This work gives an introductory presentation of the bearingless motor and its peripheral devices, focusing on the design and implementation of the sensors and interfaces needed to control the rotor's radial position and the rotation of the machine. The signals from the machine are conditioned at the analog inputs of a TMS320F2812 DSP and used in the control program. The purpose of this work is to design and build a system of sensors and interfaces suited to the inputs and outputs of the TMS320F2812 DSP to control a bearingless motor, bearing in mind modularity, circuit simplicity, low power consumption, good noise immunity and a frequency response above 10 kHz. The system is tested on a 3.7 kVA ordinary induction motor modified for use as a bearingless motor with divided coil.
Abstract:
Electric motors transform electrical energy into mechanical energy in a relatively simple way. Some specific applications require electric motors to operate with non-contaminated fluids, in high-speed systems, under inhospitable conditions, or in places of difficult access and considerable depth. In these cases, motors with mechanical bearings are not adequate, as their wear demands maintenance. A possible solution stems from two alternatives: motors with magnetic bearings, which increase the length of the machine (not convenient), and bearingless motors, which offer compactness. Induction motors have been used more and more in research, as they make bearingless motors more robust than those built from other types of machines. Previous research on bearingless induction motors used prototypes whose stator/rotor structures were modified, differing in most cases from conventional induction motors. The goal of this work is to study the viability of using conventional induction motors in bearingless motor applications, pointing out the types of motors in this category that can be most useful. The study uses the Finite Element Method (FEM). As validation, a conventional induction motor with a squirrel-cage rotor was successfully used in a bearingless motor application of the divided-winding type, confirming the proposed thesis. The control system was implemented in a Digital Signal Processor (DSP).
Abstract:
A challenge that remains in the robotics field is how to make a robot react to visual stimuli in real time. The traditional computer vision algorithms used to address this problem are still computationally expensive, taking too long on common computer processors. Even very simple algorithms, such as image filtering or mathematical morphology operations, may take too long. Researchers have implemented image processing algorithms in highly parallel hardware devices in order to cut down processing time, with good results. Using hardware-implemented image processing techniques and a platform-oriented system based on the Nios II processor, we propose an approach that combines hardware processing with event-based programming to simplify vision-based systems while accelerating parts of the algorithms used.
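Mathematical morphology operations are a good example of why such algorithms parallelize well in hardware: each output pixel depends only on a small neighborhood, so all pixels can be computed independently. The sketch below shows a 3x3 binary erosion in software; it illustrates the operation, not the FPGA implementation.

```python
# 3x3 binary erosion: an output pixel is 1 only if its whole 3x3
# neighborhood is 1. Each pixel is independent of the others, which is
# why this maps well onto highly parallel hardware.

def erode(img, w, h):
    """3x3 binary erosion; img is a flat row-major list of 0/1 pixels."""
    out = [0] * (w * h)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y * w + x] = int(all(
                img[(y + dy) * w + (x + dx)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

# A 5x5 image containing a 3x3 square of ones:
img = ([0] * 5 +
       [0, 1, 1, 1, 0] +
       [0, 1, 1, 1, 0] +
       [0, 1, 1, 1, 0] +
       [0] * 5)
print(erode(img, 5, 5)[12])   # -> 1 (only the centre pixel survives)
```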
Abstract:
This work considers the development of a filtering system composed of an intelligent algorithm that separates information from noise coming from sensors interconnected by a Foundation Fieldbus (FF) network. The algorithm is implemented through FF standard function blocks, with on-line training through OPC (OLE for Process Control), and embedded in a DSP (Digital Signal Processor) that interacts with the fieldbus devices. The ICA (Independent Component Analysis) technique, which explores the possibility of separating mixed signals based on the fact that they are statistically independent, was chosen for this Blind Source Separation (BSS) process. The algorithm, its implementations and the results are presented.
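ICA's separation criterion can be demonstrated with a self-contained toy: after the mixtures are whitened, the rotation that maximizes non-Gaussianity (here measured by excess kurtosis) of the output recovers a statistically independent source. For brevity the unit-variance sources below are mixed by a pure rotation, so they are already white and the explicit whitening step of a full ICA pipeline is skipped; the signal shapes and the brute-force angle scan are illustrative, not the FF/DSP implementation described here, which uses a proper ICA algorithm.

```python
import math
import random

random.seed(7)
n = 2000
# Two independent, zero-mean, unit-variance sources:
s1 = [random.uniform(-1, 1) * 3 ** 0.5 for _ in range(n)]       # sub-Gaussian
s2 = [random.choice([0.0] * 8 + [5 ** 0.5, -(5 ** 0.5)])
      for _ in range(n)]                                         # sparse, super-Gaussian

a = math.radians(30)          # "unknown" mixing rotation
x1 = [math.cos(a) * u + math.sin(a) * w for u, w in zip(s1, s2)]
x2 = [-math.sin(a) * u + math.cos(a) * w for u, w in zip(s1, s2)]

def kurtosis(v):
    m2 = sum(u * u for u in v) / len(v)
    m4 = sum(u ** 4 for u in v) / len(v)
    return m4 / (m2 * m2) - 3.0       # excess kurtosis (0 for a Gaussian)

def rotate(phi):
    return [math.cos(phi) * u + math.sin(phi) * w for u, w in zip(x1, x2)]

# ICA criterion: scan for the rotation maximizing non-Gaussianity.
phi = max((abs(kurtosis(rotate(math.radians(d)))), math.radians(d))
          for d in range(0, 180, 2))[1]
y = rotate(phi)

corr = sum(u * w for u, w in zip(y, s2)) / (
    sum(u * u for u in y) * sum(w * w for w in s2)) ** 0.5
print(abs(corr) > 0.9)   # the sparse source is recovered blindly
```

No knowledge of the mixing angle is used in the recovery; only the statistical independence of the sources, which is exactly the property the BSS filter exploits.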
Abstract:
As an auxiliary tool to fight hunger by reducing food waste and contributing to the improvement of the population's quality of life, CEASA/RN ran the MESA DA SOLIDARIEDADE program from August 2003 to August 2005. Despite the positive results of this program, which has already distributed around 226 tons of food, food is still thrown in the trash, since delivering it in its natural form would pose a health risk to consumers, and only proper processing can make it edible. The goal of this work is the reuse of solid residues of vegetal origin generated by CEASA/RN through the MESA DA SOLIDARIEDADE program, and the characterization of the product obtained so that it might be used as a mineral supplement in the human diet. For the collection of samples (from September to December 2004), a methodology was developed taking as reference the daily mineral salt requirements of children aged seven to ten. The samples were packed in plastic bags and transported at ambient temperature to the laboratory, where they were selected, weighed, disinfected, fractionated and dried at 70 °C in an oven. The dried sample was shredded and stored in previously sterilized bottles. The in natura sample was weighed in the same proportion as the dried sample, and a uniform mass was obtained in a domestic food processor. The physical-chemical analyses were carried out in triplicate on the in natura samples and on the dried product, determining pH, moisture, acidity and soluble solids according to IAL (1985); mineral salt contents (Ca, K, Na, Mg, P and Fe) by atomic absorption spectrophotometry; caloric value with a bomb calorimeter; and the presence of fecal coliforms and E. coli by the Colilert method (APHA, 1995).
During this period, the dried vegetable-based food presented on average 5.06% moisture, pH 4.62, acidity of 2.73 mg of citric acid/100 g of sample, 51.45 °Brix of soluble solids, 2,323.50 mg of K/100 g, 299.06 mg of Ca/100 g, 293 mg of Na/100 g, 154.66 mg of Mg/100 g, 269.62 mg of P/100 g and 6.38 mg of Fe/100 g, with a caloric value of 3.691 kcal/g (15.502 kJ/g), and was free of contamination by fecal coliforms and E. coli. The dried food developed in this research presented satisfactory characteristics regarding its conservation, being low in calories and constituting a good source of potassium, magnesium, sodium and iron that can be used as a food supplement for these minerals.
Abstract:
Waste in the escargot processing industry is substantial, consisting basically of escargot meat outside commercial standards and of the viscera. In this context, there is a need to take advantage of these by-products. One possibility is drying them, transforming them into a form that can be reused. Thus, the present work studies the reuse of the by-products of escargot industrialization by means of a drying process. The samples were turned into pastes in a domestic food processor for approximately 1 minute and compacted in unperforated aluminum trays at three different heights (5 mm, 10 mm and 15 mm). Drying was carried out in a tray dryer with transverse air circulation at a speed of 0.2 m/s and three temperature levels (70 °C, 80 °C and 90 °C). A drying kinetics study was carried out on the curves obtained and on the heat and mass transfer coefficients, using experimental procedures based on a 2² factorial experimental design. Microbiological and physicochemical analyses were also carried out on the in natura and dehydrated by-products. In the drying process, the great importance of the external resistances to heat and mass transfer in the constant-rate period, influenced by temperature, was observed. The evaporation rates indicated mixed control of the mass transfer in the case of the thickest layers. As expected, the behavior of the drying constant was influenced by the temperature and the thickness of the medium, increasing with the former and decreasing with the latter. The statistical analysis of the results, in accordance with the 2² factorial design, showed that fissures, shrinkage of the transfer area and the formation of a crust on the surface may have contributed to the differences between the experimental results and the proposed linear model.
Temperature and thickness significantly influenced the response variables studied: evaporation rate and drying constant. Significant and predictive statistical models were obtained for the evaporation rate of both the meat and the viscera.
Abstract:
The increase in transistor integration capacity has made it possible to develop complete systems, with several components, on a single chip, called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as buses do, or lead to ad hoc solutions, such as bus hierarchies. Thus, the ideal interconnection subsystem for SoCs is the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other projects. However, NoCs can raise design complexity, chip area and dissipated power. It is therefore necessary either to change the way they are used or to change the development paradigm. Thus, a NoC-based system is proposed in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds new destinations until all instructions have been executed. The objective, therefore, is to study the viability of developing this system, named the IPNoSys system. In this study, a cycle-accurate simulation tool was developed in SystemC to simulate the system executing applications, which were written in a packet description language also developed for this study. Through the simulation tool, several results were obtained and used to evaluate the system's performance. The methodology used to describe an application consists of transforming the high-level application into a data-flow graph, which becomes one or more packets. This methodology was applied to three applications: a counter, a 2-D DCT and a floating-point addition. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison with the STORM platform.
Finally, the floating-point addition aimed to evaluate the efficiency of the software routine that executes an instruction not implemented in hardware. The simulation results confirm the viability of developing the IPNoSys system. They show that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and also that the execution time of IPNoSys is more efficient than that of the STORM platform.