1000 results for Network analysis
Abstract:
New multimedia applications that use the Internet as a communication medium are pressing for the development of new technologies, such as MPLS (Multiprotocol Label Switching) and DiffServ. These technologies introduce new and powerful features into the Internet backbone, such as the provision of QoS (Quality of Service) capabilities. However, to obtain true end-to-end QoS it is not enough to implement such technologies in the network core; it is indispensable to extend these improvements to the access networks, which is the aim of several works currently under development. To contribute to this process, this Thesis presents RSVP-SVC (Resource Reservation Protocol Switched Virtual Connection), an extension of RSVP-TE. RSVP-SVC is presented herein as a means to support true end-to-end QoS by extending the scope of MPLS. Thus, a Switched Virtual Connection (SVC) service is specified for use in the context of an MPLS User-to-Network Interface (MPLS UNI), able to efficiently establish and activate, starting from the access routers, Label Switched Paths (LSPs) that satisfy the QoS requirements demanded by the applications. RSVP-SVC was specified in Estelle, a Formal Description Technique (FDT) standardized by ISO. The editing, compilation, verification and simulation of RSVP-SVC were carried out with the EDT (Estelle Development Toolset) software. The benefits and the most important issues to be considered when using the proposed protocol are also discussed.
Abstract:
The increasing number of attacks on computer networks has been addressed by adding resources directly to the active routing equipment of these networks. In this context, firewalls have been consolidated as essential elements in the process of controlling the packets entering and leaving a network. With the advent of intrusion detection systems (IDS), efforts have been made to incorporate pattern-based packet filtering into traditional firewalls. This integration combines IDS functions (such as signature-based filtering, until then a passive element) with the functions already existing in firewalls. Despite the efficiency of this incorporation in blocking attacks with known signatures, application-level filtering introduces a natural delay in the analyzed packets and can reduce the machine's capacity to filter the remaining packets, because of the machine resources demanded by this level of filtering. This work presents models that address this problem by re-routing packets for analysis by a sub-network with specific filters. The proposed implementation of this model aims to reduce the performance problem and to open space for consolidating scenarios in which other non-conventional filtering solutions (spam blocking, P2P traffic control/blocking, etc.) can be inserted into the filtering sub-network without overloading the main firewall of a corporate network.
Abstract:
In recent decades, neural networks have been established as a major tool for the identification of nonlinear systems. Among the various types of networks used in identification, one that can be highlighted is the wavelet neural network (WNN). This network combines the characteristics of wavelet multiresolution theory with the learning ability and generalization of conventional neural networks, usually providing more accurate models than those obtained with traditional networks. An extension of WNNs is to combine the neuro-fuzzy ANFIS (Adaptive Network Based Fuzzy Inference System) structure with wavelets, generating the Fuzzy Wavelet Neural Network (FWNN) structure. This network is very similar to ANFIS networks, with the difference that the traditional polynomials in the consequent part are replaced by WNNs. This work proposes the identification of nonlinear dynamical systems with a modified FWNN. In the proposed structure, only wavelet functions are used in the consequent part. Thus, a simplification of the structure is obtained, reducing the number of adjustable parameters of the network. To evaluate the performance of the FWNN with this modification, an analysis of the network's performance is made, verifying advantages, disadvantages and cost-effectiveness when compared with other FWNN structures existing in the literature. The evaluations are carried out via the identification of two simulated systems traditionally found in the literature and of a real nonlinear system consisting of a nonlinear multi-section tank. Finally, the network is used to infer temperature and humidity values inside a neonatal incubator. These analyses are based on various criteria, such as mean squared error, number of training epochs, number of adjustable parameters and variation of the mean squared error, among others. The results show the generalization ability of the modified structure, despite the simplification performed.
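The consequent simplification described above (wavelets in place of the ANFIS polynomials) can be illustrated with a minimal sketch. The Mexican-hat mother wavelet and Gaussian rule memberships below are common choices for this kind of structure, assumed here for illustration rather than taken from the thesis:

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) mother wavelet, a common choice in WNNs."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def wavelon(x, translation, dilation):
    """Single wavelet neuron: dilated and translated mother wavelet."""
    return mexican_hat((x - translation) / dilation)

def fwnn_output(x, centers, sigmas, translations, dilations):
    """ANFIS-like normalized weighted sum, where each rule's Gaussian
    membership weights a wavelet (instead of a polynomial) consequent."""
    memberships = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))
    consequents = wavelon(x, translations, dilations)
    return np.sum(memberships * consequents) / np.sum(memberships)
```

Only the translations and dilations of the wavelets and the membership parameters need adjusting, which is the source of the parameter reduction mentioned in the abstract.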
Abstract:
This work proposes the specification of a new function block according to the Foundation Fieldbus standards. The new block implements an artificial neural network, which may be useful in process control applications. The specification includes the definition of the main algorithm, which implements a neural network, as well as the description of some accessory functions that provide safety characteristics to the block's operation. In addition, it describes the block's attributes, emphasizing its parameters, which constitute the block's interfaces. Some experimental results, obtained from an artificial neural network implementation using actual standard function blocks on a laboratory FF network, are also shown, in order to demonstrate the possibility, and also the convenience, of integrating a neural network into Fieldbus devices.
Abstract:
In this work, the implementation of the SOM (Self-Organizing Maps) algorithm, or Kohonen neural network, in the form of hierarchical structures applied to image compression is presented. The main objective of this approach is to develop a Hierarchical SOM algorithm with a static structure, and another with a dynamic structure, to generate codebooks in the image Vector Quantization (VQ) process, reducing the processing time and obtaining a good image compression rate with minimum degradation of quality in relation to the original image. The two self-organizing neural networks developed here were named HSOM, for the static case, and DHSOM, for the dynamic case. In the first, the hierarchical structure is defined beforehand; in the latter, the structure grows automatically according to heuristic rules that explore the training data without the use of external parameters. For this network, the heuristic rules determine the growth dynamics, the branch-pruning criteria, the flexibility and the size of the child maps. The LBG (Linde-Buzo-Gray) algorithm, or K-means, one of the most widely used algorithms for building codebooks for Vector Quantization, was used together with Kohonen's algorithm in its basic (non-hierarchical) form as a reference to compare the performance of the algorithms proposed here. A performance analysis of the two hierarchical structures is also carried out in this work. The efficiency of the proposed processing is verified by the reduction in computational complexity compared with the traditional algorithms, as well as through quantitative analysis of the reconstructed images in terms of the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE).
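The two image-quality metrics named at the end of the abstract are standard. A minimal implementation, assuming 8-bit images (peak value 255), could look like:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between original and reconstructed images."""
    o = np.asarray(original, dtype=np.float64)
    r = np.asarray(reconstructed, dtype=np.float64)
    return np.mean((o - r) ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less degradation."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)
```

In a VQ experiment, `reconstructed` would be the image rebuilt from the codebook indices produced by HSOM, DHSOM or LBG.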
Abstract:
This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, that is, in a programmable device such as a field programmable gate array (FPGA). This work allowed the exploration of different implementations, described in VHDL, of multilayer perceptron ANNs. Due to the parallelism inherent to ANNs, software implementations are at a disadvantage because of the sequential nature of Von Neumann architectures. As an alternative, a hardware implementation makes it possible to exploit all the parallelism implicit in this model. Currently, FPGAs are increasingly used as a platform to implement neural networks in hardware, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture, in which it is possible to add or remove neurons and, mainly, to modify the network topology, in order to enable a modular network in fixed-point arithmetic on an FPGA. Five VHDL descriptions were synthesized: two for the neuron, with one or two inputs, and three for different ANN architectures. The descriptions of the architectures are very modular, easily allowing the number of neurons to be increased or decreased. As a result, complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
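As a rough software analogue of the fixed-point arithmetic described, a single neuron's multiply-accumulate might be sketched as below. The Q-format with 8 fractional bits and the hard-limit activation are assumptions for illustration; the abstract does not state the word format used:

```python
FRAC_BITS = 8            # fractional bits of the assumed Q-format
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a real value to signed fixed point."""
    return int(round(x * SCALE))

def fixed_neuron(inputs, weights, bias):
    """Integer multiply-accumulate of one neuron, mimicking what a VHDL
    description would synthesize: each product is rescaled back by an
    arithmetic shift, and a hard-limit (step) activation is applied."""
    acc = to_fixed(bias)
    for x, w in zip(inputs, weights):
        acc += (to_fixed(x) * to_fixed(w)) >> FRAC_BITS
    return 1 if acc >= 0 else 0
```

With weights 0.5/0.5 and bias -0.75 this neuron behaves as a logical AND, a classic smoke test for hardware neuron descriptions.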
Abstract:
In this work, we propose a Geographical Information System that can be used as a tool for the treatment and study of problems related to environmental and city management issues. It is based on the Scalable Vector Graphics (SVG) standard for Web graphics development. The project uses the concept of remote, real-time map creation through database access, with instructions executed by browsers on the Internet. As a way of proving the system's effectiveness, we present two case studies: the first on a region named Maracajaú Coral Reefs, located on the Rio Grande do Norte coast, and the second in northeastern Switzerland, in which we intended to promote the substitution of MapServer by the system proposed here. We also show results that demonstrate the greater geographical data capability achieved through the use of standardized codes and open-source tools, such as the Extensible Markup Language (XML), the Document Object Model (DOM), the ECMAScript/JavaScript scripting languages, the Hypertext Preprocessor (PHP) and PostgreSQL with its PostGIS extension.
Abstract:
Conventional methods for solving the nonlinear blind source separation problem generally use a series of restrictions to obtain the solution, often leading to imperfect separation of the original sources and high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve linear and, later, nonlinear blind source separation problems. In the linear case, genetic algorithms and Rényi's negentropy are applied as a measure of independence to find a separation matrix from linear mixtures of waveform, audio and image signals. A comparison is made with two types of Independent Component Analysis algorithms widespread in the literature. Subsequently, we use the same measure of independence as the cost function of the genetic algorithm to recover source signals that were mixed by nonlinear functions, employing an artificial neural network of the radial basis function type. Genetic algorithms are powerful global search tools and are therefore well suited for use in blind source separation problems. Tests and analyses are carried out through computer simulations.
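The linear stage described above can be caricatured in a few lines. This sketch replaces Rényi's negentropy with absolute excess kurtosis as the independence proxy and searches only over a rotation angle for two pre-whitened mixtures, so it illustrates the GA-based approach rather than reproducing the thesis algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def kurtosis(y):
    """Excess kurtosis, a simple stand-in for the negentropy measure."""
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0

def fitness(theta, mixtures):
    """Non-Gaussianity of the outputs of a rotation unmixing matrix."""
    c, s = np.cos(theta), np.sin(theta)
    W = np.array([[c, -s], [s, c]])
    y = W @ mixtures
    return abs(kurtosis(y[0])) + abs(kurtosis(y[1]))

def ga_search(mixtures, pop_size=30, generations=40, sigma=0.3):
    """Tiny real-coded GA: truncation selection plus Gaussian mutation."""
    pop = rng.uniform(0.0, np.pi, pop_size)
    for _ in range(generations):
        scores = np.array([fitness(t, mixtures) for t in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
        children = elite + rng.normal(0.0, sigma, elite.size)
        pop = np.concatenate([elite, children])
    scores = np.array([fitness(t, mixtures) for t in pop])
    return pop[np.argmax(scores)]
```

Because the fitness landscape over the angle has several equivalent optima (the usual permutation and sign ambiguities of BSS), the GA's population-based global search is a natural fit, as the abstract argues.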
Abstract:
The main purpose of this work is to develop an environment that allows the HYSYS® chemical process simulator to communicate with sensors and actuators of a Foundation Fieldbus industrial network. The environment is considered a hybrid resource, since it has a real portion (the industrial network) and a simulated one (the process), with all measurement and control signals also being real. It is possible to reproduce different industrial process dynamics without requiring any physical modification of the network, enabling the simulation of situations that exist in a real industrial environment. This feature attests to the flexibility of the environment. In this work, a distillation column is simulated in HYSYS® with all its variables measured and controlled by Foundation Fieldbus devices.
Abstract:
The advance of computer networks in recent decades is remarkable, whether in transmission rates, in the number of interconnected devices or in the existing applications. In parallel, this progress is also visible in various sectors of automation, such as the industrial, commercial and residential ones. In one of its branches we find hospital networks, which can make use of a range of services, from the simple registration of patients to surgery performed by a robot under the supervision of a physician. In the context of both worlds appear the applications of Telemedicine and Telehealth, which work with the real-time transfer of high-resolution images, sound, video and patient data. A problem then arises, since computer networks, originally developed for the transfer of less complex data, are now being used by services that involve high transfer rates and require quality of service (QoS) guarantees from the network. Thus, this work aims to analyze and compare the performance of a network subjected to this type of application in two different situations: the first without the use of QoS policies, and the second with the application of such policies, using as the test scenario the Metropolitan Health Network of the Federal University of Rio Grande do Norte (UFRN).
Abstract:
The public illumination system of the city of Natal/RN presents recurring monitoring problems, since it is currently not possible to detect in real time the light bulbs that remain on throughout the day, or those that are off or burned out at night. These factors degrade the efficiency of the services provided, as well as the use of energy resources, because energy, and consequently financial resources that could be applied to the public illumination system itself, is wasted. The purpose of this work is to create a prototype to replace the photoelectric relays currently used in public illumination, with the same function as well as others: turning the light bulbs on and off remotely (control flexibility through specific supervisory algorithms), checking the light bulb status (on or off), and communicating wirelessly with the system through the ZigBee® protocol. The development steps of this product and the tests carried out are reported as a way to validate and justify its use in public illumination.
Abstract:
Brazilian sanitation companies face a great challenge for the 21st century: mitigating the physical waste (water, chemicals and electricity) and the financial waste caused by inefficient drinking water supply systems, considering that in some cases the scarcity of water resources is already a reality. Supply systems become increasingly complex as they seek to minimize waste while better serving a growing number of users, and this technological change must serve those users with higher quality and efficiency. A major challenge for water supply companies is thus to provide a good-quality service while reducing electricity expenditure. In this situation, we developed research on a method that seeks to control the pressure of distribution systems that have no reservoir in their configuration, in which the water flows from the well directly into the distribution system. The pressure control method (intelligent control) uses fuzzy logic to eliminate the waste of electricity, and the leaks, produced by pumps that inject water directly into the distribution system, which wastes energy when household consumption drops and the distribution system saturates. This study was conducted at the Green Club II condominium, located in the city of Parnamirim, state of Rio Grande do Norte, in order to study the pressure behavior at the output of the pump that injects water directly into the distribution system. The study was motivated by the need to find a solution to leaks in the existing distribution system and in the extensions of the condominium's residences, which sparked the interest in carrying out the experiments contained in this research.
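A minimal sketch of such a fuzzy pressure controller is given below. The triangular membership limits and the output singletons (pump speed corrections in Hz) are illustrative assumptions, not the values tuned in the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pump_correction(error):
    """Pressure error (setpoint minus measured) -> pump speed correction.
    Three rules: negative error slows the pump, zero holds, positive
    speeds it up; defuzzification by weighted average of singletons."""
    mu_neg = tri(error, -10.0, -5.0, 0.0)
    mu_zero = tri(error, -5.0, 0.0, 5.0)
    mu_pos = tri(error, 0.0, 5.0, 10.0)
    weights = (mu_neg, mu_zero, mu_pos)
    outputs = (-2.0, 0.0, 2.0)          # Hz, illustrative singletons
    total = sum(weights)
    if total == 0.0:                    # error outside the universe: saturate
        return 2.0 if error > 0 else -2.0
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

Reducing the pump speed as household consumption drops is what removes the over-pressure that the abstract identifies as the cause of leaks and wasted energy.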
Abstract:
There is currently great concern about replacing non-renewable sources with renewable sources in electric power generation. This is due to the limitations of the traditional model and to the growing demand. With the development of power converters and the effectiveness of control schemes, renewable sources have been connected to the electrical grid in a distributed generation model. In this sense, this work presents a non-conventional control strategy, using a robust controller, for the interconnection of photovoltaic systems with the three-phase electrical grid. Power quality compensation at the point of common coupling (PCC) is performed by the proposed strategy. Traditional techniques use harmonic detection; in this work, the currents are controlled indirectly, without the need for such detection. In the indirect strategy, it is very important that the DC bus voltage be controlled without large fluctuations, and that the controller's steady-state bandwidth be low, so that the grid currents do not have a high THD. For this reason, a dual-mode DSM-PI controller is used, which behaves as a sliding-mode SM-PI controller during transients and as a conventional PI in steady state. The current is aligned with the phase angle of the grid voltage vector, obtained through a PLL. This approach makes it possible to regulate the active power flow, together with harmonic compensation, and also to provide power factor correction at the point of common coupling. For current control, a double-sequence controller based on the internal model principle is used. Simulation results are presented to demonstrate the effectiveness of the proposed control system.
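The dual-mode behaviour described for the DC bus controller can be sketched as follows. The gains, the switching band and the sample time are illustrative assumptions; only the mode-switching idea (sliding-mode-like action in transients, conventional PI in steady state) comes from the abstract:

```python
class DualModePI:
    """Sketch of a dual-mode (DSM-PI-style) controller: an aggressive
    switching action dominates during transients, and a low-bandwidth
    conventional PI takes over inside the steady-state band."""

    def __init__(self, kp=0.5, ki=20.0, k_sm=5.0, band=0.05, dt=1e-4):
        self.kp, self.ki, self.k_sm = kp, ki, k_sm
        self.band = band          # |error| above this -> transient mode
        self.dt = dt
        self.integral = 0.0

    def update(self, error):
        if abs(error) > self.band:
            # transient: high-gain sliding-mode-flavoured action
            return self.k_sm * (1.0 if error > 0 else -1.0) + self.kp * error
        # steady state: conventional PI with low bandwidth
        self.integral += self.ki * error * self.dt
        return self.kp * error + self.integral
```

Freezing the integrator outside the band also limits the DC bus voltage fluctuations that the indirect current control strategy is sensitive to.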
Abstract:
Currently, there are several power converter topologies applied to wind power generation. Converters allow wind turbines to operate at variable speed, enabling better use of the wind. High converter performance is increasingly demanded, mainly because of the increase in the power generation capacity of wind turbines, which has given rise to various converter topologies, such as parallel and multilevel converters. The use of converters allows effective control of the power injected into the grid, either partial, in the case of a partial-scale converter, or total, in the case of a full-scale converter. The back-to-back converter is one of the most used topologies in the market today due to its simple structure with few components, contributing to robust and reliable performance. In this work, the implementation of a wind cogeneration system using a permanent magnet synchronous generator (PMSG) associated with a back-to-back power converter is proposed, in order to inject active power into an electric power system. The control strategy for the active power delivered to the grid by the cogeneration system is based on the philosophy of indirect control.
Abstract:
Hard metals are composites, developed in 1923 by Karl Schröter, with wide application because of their high hardness, wear resistance and toughness. They are composed of a brittle WC phase and a ductile Co phase. The mechanical properties of hardmetals are strongly dependent on the microstructure of the WC-Co and are additionally affected by the microstructure of the WC powders before sintering. An important feature is that toughness and hardness increase simultaneously with the refinement of the WC. Therefore, the development of nanostructured WC-Co hardmetals has been extensively studied. There are many methods to manufacture WC-Co hard metals, including the spray conversion process, co-precipitation, the displacement reaction process, mechanochemical synthesis and high-energy ball milling. High-energy ball milling is a simple and efficient way of manufacturing fine powder with a nanostructure. In this process, the continuous impacts on the powders promote pronounced changes: the brittle phase is refined to the nanometric scale and brought into the ductile matrix, while the ductile phase is deformed, re-welded and hardened. The goal of this work was to investigate the effects of high-energy milling time on the microstructural changes of the WC-Co particulate composite, particularly the refinement of the crystallite size and the lattice strain. The starting powders were WC (average particle size D50 0.87 μm), supplied by Wolfram Bergbau- und Hütten GmbH, and Co (average particle size D50 0.93 μm), supplied by H.C. Starck. A mixture of 90% WC and 10% Co was milled in a planetary ball mill for 2, 10, 20, 50, 70, 100 and 150 hours, with a ball-to-powder ratio (BPR) of 15:1 at 400 rpm. The starting powders and the milled particulate composite samples were characterized by X-ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) to identify phases and morphology. The crystallite size and lattice strain were measured by Rietveld's method, a procedure that provides more precise information about the influence of each factor on the microstructure.
The results show that high-energy milling is an efficient manufacturing process for the WC-Co composite, and that the milling time has great influence on the microstructure of the final particles, crushing the WC to nanometric order and dispersing it finely within the Co particles.
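The crystallite size and lattice strain mentioned above were obtained by Rietveld refinement. The classic Williamson-Hall plot extracts the same two quantities from XRD peak broadening and makes their relationship explicit; the Cu K-alpha wavelength and the shape factor K = 0.9 below are common assumptions, not values taken from the thesis:

```python
import numpy as np

K = 0.9                 # Scherrer shape factor (common assumption)
WAVELENGTH = 0.15406    # Cu K-alpha wavelength in nm (assumption)

def williamson_hall(two_theta_deg, fwhm_deg):
    """Separate crystallite size D and lattice strain eps from peak
    broadening via beta*cos(theta) = K*lambda/D + 4*eps*sin(theta):
    a straight-line fit whose intercept gives D and slope gives eps.
    Returns (D in nm, eps)."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    beta = np.radians(np.asarray(fwhm_deg))   # FWHM in radians
    y = beta * np.cos(theta)                  # ordinate of the W-H plot
    x = 4.0 * np.sin(theta)                   # abscissa of the W-H plot
    slope, intercept = np.polyfit(x, y, 1)
    return K * WAVELENGTH / intercept, slope
```

The trend reported in the abstract, longer milling times refining the crystallites and increasing the lattice strain, would appear here as a growing intercept and slope of the fitted line.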