11 results for "tráfego de máquinas" (machine traffic)
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
The number of applications based on embedded systems grows significantly every year. Although embedded systems have restricted resources and simple processing units, their performance has been improving steadily; however, since the complexity of applications also increases, better performance will always be necessary. Even with such advances, there are cases in which an embedded system with a single processing unit is not sufficient to process the information in real time. To improve the performance of these systems, parallel processing can be employed in more complex applications that require high performance. The idea is to move beyond applications that already use embedded systems, exploring the use of a set of processing units working together to implement an intelligent algorithm. There is a large body of work in the areas of parallel processing, intelligent systems, and embedded systems; however, works that link these three areas to solve a problem are scarce. In this context, this work aimed to use the tools available for FPGA architectures to develop a multiprocessor platform for pattern classification with artificial neural networks.
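The parallelization idea in this abstract is essentially neuron/data partitioning: each processing unit evaluates only a slice of the network and the partial results are combined. A minimal software sketch of that partitioning (a Python stand-in for the FPGA processors; layer sizes, worker count, and data are hypothetical, not from the thesis) could look like this:

```python
# Illustrative sketch only: a software analogue of splitting one neural-network
# layer across several processing units, as a multiprocessor platform might.
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))      # weights of a fully connected layer (64 in, 16 out)
b = rng.normal(size=16)            # biases
x = rng.normal(size=64)            # one input pattern to classify

def unit_forward(args):
    """One 'processing unit' computes only its slice of the output neurons."""
    W_slice, b_slice, x_local = args
    return np.tanh(x_local @ W_slice + b_slice)

def parallel_layer(x, W, b, n_units=4):
    # Partition the output neurons among the processing units.
    cols = np.array_split(np.arange(W.shape[1]), n_units)
    tasks = [(W[:, c], b[c], x) for c in cols]
    with Pool(n_units) as pool:
        parts = pool.map(unit_forward, tasks)
    return np.concatenate(parts)

if __name__ == "__main__":
    y_parallel = parallel_layer(x, W, b)
    y_serial = np.tanh(x @ W + b)
    print(np.allclose(y_parallel, y_serial))  # True: same result, work split 4 ways
```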
Abstract:
The human voice is an important communication tool, and any voice disorder can have profound implications for an individual's social and professional life. Digital signal processing techniques have been used in the acoustic analysis of vocal disorders caused by laryngeal pathologies, owing to their simplicity and noninvasive nature. This work deals with the acoustic analysis of voice signals affected by laryngeal pathologies, specifically edema and nodules on the vocal folds. The purpose of this work is to develop a voice classification system to support the pre-diagnosis of laryngeal pathologies, as well as the monitoring of pharmacological and post-surgical treatments. Linear Prediction Coefficients (LPC), Mel-Frequency Cepstral Coefficients (MFCC), and coefficients obtained through the Wavelet Packet Transform (WPT) are applied to extract relevant characteristics from the voice signal. For the classification task, a Support Vector Machine (SVM) is used, which builds optimal hyperplanes that maximize the margin of separation between the classes involved. The generated hyperplane is determined by the support vectors, which are subsets of the points in these classes. On the database used in this work, the results showed good performance, with a hit rate of 98.46% for the classification of normal versus pathological voices in general, and 98.75% for the classification of the two pathologies, edema and nodules.
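The classification pipeline described here (MFCC features fed to a maximum-margin SVM) can be sketched as follows; the LPC and wavelet-packet features are omitted, and the dataset, kernel, and parameters below are placeholders rather than the thesis values:

```python
# Minimal sketch of the MFCC + SVM pipeline described above.
import numpy as np
import librosa                      # MFCC extraction
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def mfcc_features(wav_path, n_mfcc=13):
    """One feature vector per recording: MFCCs averaged over time."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# In the real system, X would be built from the voice database, e.g.:
#   X = np.array([mfcc_features(p) for p in wav_paths]); y = labels
# Synthetic features stand in here so the classification step is runnable.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 13)),      # "normal" voices
               rng.normal(1.5, 1.0, (40, 13))])     # "pathological" voices
y = np.array([0] * 40 + [1] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")        # maximum-margin classifier
clf.fit(X_tr, y_tr)
print("hit rate:", clf.score(X_te, y_te))
```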
Abstract:
Some approaches take advantage of unused computational resources in Internet nodes (users' machines). In recent years, peer-to-peer (P2P) networks have gained momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present problems such as node overhead due to message routing, a large number of node reconfigurations when the network topology changes, routing traffic inside a specific network even when the traffic is not directed to a machine of that network, and the lack of a relationship between the proximity of nodes in the P2P overlay and the proximity of those nodes in the IP network. Although some architectures use information about node distance in the IP network, they rely on methods that require dynamic information. In this work we propose a P2P architecture to address the aforementioned problems. It is composed of three parts. The first is a basic P2P architecture, called SGrid, which maintains a relationship between the position of nodes in the P2P network and their position in the IP network; it assigns adjacent key regions to nodes of the same organization. The second is a protocol called NATal (Routing and NAT application layer), which extends the basic architecture in order to remove from the nodes the responsibility of routing messages. The third is a special kind of node, called LSP (Lightware Super-Peer), which is responsible for maintaining the P2P routing table. In addition, this work also presents a simulator that validates the architecture and a module of the NATal protocol to be used in Linux routers.
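The SGrid idea of giving adjacent key regions to nodes of the same organization can be illustrated with a small sketch; the key-space size, node structure, and splitting rule below are illustrative assumptions, not the thesis design:

```python
# Sketch: organizations receive contiguous slices of the identifier space,
# split among their nodes, so overlay neighbors tend to be IP-network neighbors.
from dataclasses import dataclass

KEY_SPACE = 2 ** 16  # illustrative identifier space

@dataclass
class Node:
    org: str
    addr: str
    region: range = range(0)

def assign_regions(nodes):
    """Give each organization one contiguous slice, then split it among its nodes."""
    orgs = sorted({n.org for n in nodes})
    org_slice = KEY_SPACE // len(orgs)
    for i, org in enumerate(orgs):
        members = [n for n in nodes if n.org == org]
        node_slice = org_slice // len(members)
        for j, node in enumerate(members):
            start = i * org_slice + j * node_slice
            node.region = range(start, start + node_slice)
    return nodes

def lookup(nodes, key):
    """Resolve a key to the node whose region contains it."""
    return next(n for n in nodes if key in n.region)

peers = assign_regions([Node("ufrn.br", "10.0.0.1"), Node("ufrn.br", "10.0.0.2"),
                        Node("example.org", "192.168.1.1")])
print(lookup(peers, 1234).addr)
```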
Abstract:
In the last two decades of the past century, following the consolidation of the Internet as the worldwide computer network, applications generating more robust data flows started to appear. The increasing use of videoconferencing stimulated the creation of a new form of point-to-multipoint transmission called IP Multicast. Companies working on software and hardware development for network videoconferencing adjusted their products and developed new solutions for the use of multicast. However, configuring such different solutions is not easy, especially when changes in the operating system are also required. Besides, the existing free tools have limited functions, and the current commercial solutions are heavily dependent on specific platforms. With the maturity of IP Multicast technology and its inclusion in all current operating systems, object-oriented programming languages gained classes able to handle multicast traffic. So, with the help of Java APIs for networking, databases, and hypertext, it became possible to develop an integrated environment able to handle multicast traffic, which is the main objective of this work. This document describes the implementation of the above-mentioned environment, which provides many functions to use and manage multicast traffic, functions that previously existed only in a limited way and only in a few, typically commercial, tools. This environment is useful to different kinds of users: it can be used by common users who want to join multimedia Internet sessions, as well as by more advanced users, such as engineers and network administrators, who may need to monitor and handle multicast traffic.
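The environment itself is built on Java's multicast classes; as a language-neutral illustration, the basic operation an end user needs, joining a multicast group and receiving session datagrams, might look like the Python stand-in below (group address and port are illustrative):

```python
# Joining an IP Multicast group and receiving datagrams from a session.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # example administratively scoped group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel (via IGMP) to join the group on the default interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {sender}")
```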
Abstract:
This work aims to design and develop a control and monitoring system for vending machines, based on a central processing unit with an Internet communication peripheral. Coupled to condom vending machines, a data acquisition module will be connected to the original circuits in order to collect the information and send it, via the Internet, to public health agencies in the form of charts and reports. With this, such agencies may analyze these data and compare them with the medium- and long-term reduction rates of STD/AIDS in their respective regions after the deployment of these vending machines, together with conventional prevention programs. Regarding the methodology, this is an explanatory and bibliographic study with a qualitative-quantitative character, using a deductive method of approach and an indirect documentation research technique. From the results of the tests and simulations, we concluded that the implementation of this system would be equally successful in any other type of dispenser machine.
Abstract:
Reinforcement learning is a machine learning technique that, although it has found a large number of applications, may not yet have reached its full potential. One of the insufficiently explored possibilities is the use of reinforcement learning in combination with other methods for solving pattern classification problems. The problems that support vector machine ensembles face in terms of generalization capacity are well documented in the literature: algorithms such as AdaBoost do not deal appropriately with the imbalances that arise in those situations, and several alternatives have been proposed with varying degrees of success. This dissertation presents a new approach to building committees of support vector machines. The presented algorithm combines the AdaBoost algorithm with a reinforcement learning layer that adjusts committee parameters in order to prevent imbalances among the committee components from affecting the generalization performance of the final hypothesis. Comparisons were made between ensembles with and without the reinforcement learning layer, on benchmark data sets widely known in the pattern classification area.
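The baseline the dissertation builds on, an AdaBoost-style committee of SVMs, can be sketched as below; the reinforcement-learning layer that re-tunes the committee is the dissertation's contribution and is not reproduced here, and the data, kernel, and number of rounds are illustrative:

```python
# AdaBoost-style committee of SVMs (baseline only, no reinforcement-learning layer).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
y = np.where(y == 0, -1, 1)                       # boosting with labels in {-1, +1}

def adaboost_svm(X, y, rounds=5):
    n = len(y)
    w = np.full(n, 1.0 / n)                       # sample weights
    committee = []
    for _ in range(rounds):
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # vote weight of this member
        w *= np.exp(-alpha * y * pred)            # emphasize misclassified samples
        w /= w.sum()
        committee.append((alpha, clf))
    return committee

def committee_predict(committee, X):
    votes = sum(alpha * clf.predict(X) for alpha, clf in committee)
    return np.sign(votes)

committee = adaboost_svm(X, y)
print("training accuracy:", (committee_predict(committee, X) == y).mean())
```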
Abstract:
Electric motors transform electrical energy into mechanical energy in a relatively simple way. In some specific applications, electric motors must operate with non-contaminated fluids, in high-speed systems, under inhospitable conditions, or in places of difficult access and considerable depth. In these cases, motors with mechanical bearings are not adequate, as their wear gives rise to maintenance. A possible solution to these problems stems from two alternatives: motors with magnetic bearings, which increase the length of the machine (not convenient), and bearingless motors, which add compactness. Induction motors have been used more and more in research, as they confer more robustness to bearingless motors than other types of machines. The research carried out so far with bearingless induction motors has used prototypes whose stator/rotor structures were modified, differing in most cases from conventional induction motors. The goal of this work is to study the viability of using conventional induction motors in bearingless motor applications, pointing out which types of motors in this category can be most useful. The study uses the Finite Element Method (FEM). As a means of validation, a conventional squirrel-cage induction motor was successfully used in a bearingless motor application of the divided-winding type, confirming the proposed thesis. The control system was implemented in a Digital Signal Processor (DSP).
Abstract:
Power system stabilizers are used to damp low-frequency electromechanical oscillations and improve the stability limits of synchronous generators. This master's thesis proposes a wavelet-based power system stabilizer, built on a new methodology for the extraction and compensation of electromechanical oscillations in electrical power systems based on the scaling-coefficient energy of the maximal overlap discrete wavelet transform, in order to reduce the delay and attenuation effects of conventional power system stabilizers. Moreover, the wavelet-coefficient energy is used to detect electrical oscillations and to trigger the power system stabilizer only in fault situations. The performance of the proposed power system stabilizer was assessed with experimental results and compared with a conventional power system stabilizer. Furthermore, the effects of the choice of mother wavelet were also evaluated in this work.
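The detection step described here, triggering the stabilizer when wavelet-coefficient energy rises, can be sketched as follows; MODWT is approximated by PyWavelets' stationary wavelet transform, and the sampling rate, wavelet, decomposition level, and threshold are illustrative assumptions, not the thesis settings:

```python
# Oscillation detection from wavelet-coefficient energy over a sliding window.
import numpy as np
import pywt

fs = 128                                     # Hz (assumed sampling rate)
t = np.arange(0, 8, 1 / fs)                  # 1024 samples, divisible by 2**3
speed = np.zeros_like(t)                     # rotor-speed deviation signal
mask = (t >= 4) & (t < 6)                    # 1 Hz swing after a simulated "fault"
speed[mask] = 0.05 * np.sin(2 * np.pi * 1.0 * (t[mask] - 4))

# Undecimated wavelet decomposition; coeffs[0] holds the coarsest level,
# chosen here so its band covers the electromechanical range.
coeffs = pywt.swt(speed, "db4", level=3)
_, detail = coeffs[0]

def window_energy(x, width):
    """Sliding-window energy of the wavelet coefficients."""
    return np.convolve(x ** 2, np.ones(width), mode="same")

energy = window_energy(detail, width=fs // 2)
threshold = 10 * energy[: 2 * fs].mean() + 1e-9   # baseline from the quiet interval
trigger = energy > threshold                      # stabilizer enabled only here
print("oscillation detected at t ~ %.2f s" % t[np.argmax(trigger)])
```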
Abstract:
In this thesis we investigate physical problems that present a high degree of complexity, using tools and models of statistical mechanics. We give special attention to systems with long-range interactions, such as one-dimensional long-range bond percolation, complex networks without a metric, and vehicular traffic. Flow in a linear chain (percolation) with bonds only between first neighbors occurs only if pc = 1, but when long-range interactions are considered the situation is completely different, i.e., the transition between the percolating and non-percolating phases happens for pc < 1. This kind of transition occurs even when the system is diluted (dilution of sites). Some of these effects are investigated in this work, for example the extensivity of the system and the relation between critical properties and dilution. In particular, we show that dilution does not change the universality of the system. In another work, we analyze the implications of using a power-law fitness distribution for vertices in the growth dynamics of the network model studied by Bianconi and Barabási, which incorporates into the preferential attachment the different ability (fitness) of the nodes to compete for links. Finally, we study vehicular traffic on road networks subjected to an increasing flux of cars. To this end, we develop two models that enable the analysis of the total flux on each road, the flux leaving the system, and the behavior of the total number of congested roads.
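The growth rule of the Bianconi-Barabási model mentioned above, attachment probability proportional to fitness times degree, can be sketched as follows; the power-law exponent of the fitness distribution, the network size, and the number of links per new node are illustrative choices, not the thesis parameters:

```python
# Fitness-driven preferential attachment with a power-law fitness distribution.
import numpy as np

rng = np.random.default_rng(42)

def grow_fitness_network(n_nodes=2000, m=2, fitness_exponent=2.5):
    # Power-law distributed fitness (Pareto), as considered in the analysis above.
    fitness = rng.pareto(fitness_exponent, size=n_nodes) + 1.0
    degree = np.zeros(n_nodes)
    edges = [(0, 1)]
    degree[[0, 1]] = 1
    for new in range(2, n_nodes):
        weights = fitness[:new] * degree[:new]     # attachment kernel: fitness * degree
        prob = weights / weights.sum()
        targets = rng.choice(new, size=min(m, new), replace=False, p=prob)
        for tgt in targets:
            edges.append((new, tgt))
            degree[new] += 1
            degree[tgt] += 1
    return degree, fitness, edges

degree, fitness, _ = grow_fitness_network()
print("max degree:", degree.max(), "| fitness of hub:", fitness[np.argmax(degree)])
```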
Abstract:
This study is an experimental analysis of the fuel consumption of a flex-fuel vehicle operating with different mixtures of gasoline and ethanol in urban traffic, allowing results more consistent with the driver's reality. Most owners are unaware of the possibility of mixing the fuels at the time of refueling, which would enable the choice of the most economically viable gasoline/ethanol mixture, resulting in lower costs and possibly a decrease in pollutant emission rates. Currently, there is a popular belief that refueling with ethanol only becomes viable if its price is no more than 70% of the price of regular gasoline. Vehicles with flex-fuel technology, however, can operate with any mixture percentage in the fuel tank; yet today many owners of these vehicles do not use this feature effectively, either because they are unaware of the possibility of mixing or because there is no deeper study of the optimal mixture percentage that would provide a higher yield at a lower cost than that proposed by the manufacturers.
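The "70% rule" mentioned above reduces to a simple cost-per-kilometre comparison; a worked example with assumed prices and consumption figures (not values measured in the study, and using a linear interpolation between the pure-fuel consumptions as a rough model for blends) is given below:

```python
# Cost-per-km comparison behind the "70% rule"; all figures are assumptions.
PRICE_GASOLINE = 5.80      # R$/L (assumed)
PRICE_ETHANOL = 3.90       # R$/L (assumed)
KM_PER_L_GASOLINE = 12.0   # km/L on pure gasoline (assumed)
KM_PER_L_ETHANOL = 8.5     # km/L on pure ethanol (assumed)

def cost_per_km(ethanol_fraction):
    """Cost per km for a blend with the given ethanol fraction (0.0 to 1.0)."""
    km_per_l = (1 - ethanol_fraction) * KM_PER_L_GASOLINE + ethanol_fraction * KM_PER_L_ETHANOL
    price_per_l = (1 - ethanol_fraction) * PRICE_GASOLINE + ethanol_fraction * PRICE_ETHANOL
    return price_per_l / km_per_l

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{int(frac * 100):3d}% ethanol: R$ {cost_per_km(frac):.3f}/km")

# Rule of thumb: pure ethanol pays off when its price is below roughly the
# ratio of the two consumptions (here 8.5/12 ~ 0.71) times the gasoline price.
print("break-even ethanol price:",
      round(PRICE_GASOLINE * KM_PER_L_ETHANOL / KM_PER_L_GASOLINE, 2), "R$/L")
```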
Abstract:
This study is an experimental analysis of the fuel consumption of a flex-fuel vehicle operating with different mixtures of gasoline and ethanol in urban traffic, allowing results more consistent with the driver's reality. Most owners are unaware of the possibility of mixing the fuels at the time of refueling, which would enable the choice of the most economically viable gasoline/ethanol mixture, resulting in lower costs and possibly a decrease in pollutant emission rates. Currently, there is a popular belief that refueling with ethanol only becomes viable if its price is no more than 70% of the price of regular gasoline. Vehicles with flex-fuel technology, however, can operate with any mixture percentage in the fuel tank; yet today many owners of these vehicles do not use this feature effectively, either because they are unaware of the possibility of mixing or because there is no deeper study of the optimal mixture percentage that would provide a higher yield at a lower cost than that proposed by the manufacturers.