831 results for network-based intrusion detection system


Relevance: 100.00%

Publisher:

Abstract:

Aiming to use an IP network to implement closed control loops, this work studies the operation of a distributed dynamic control system, comparing it with the operation of a conventional local control system. In general, the decision to design a distributed control architecture is based on simplicity, cost reduction and reliability; hence, the use of an IP network is an important differentiator. The purpose of a control network is not to transmit digital data but sampled analog data. Thus, the usual computer-network metrics, such as data volume and transfer rate, become secondary in a control network. Techniques are proposed to handle delayed packets and to recover the performance of the control system operating over the IP network. The key to this method is estimating the contents of delayed packets from the dynamic model of the system, keeping the system at an adequate performance level. The system considered is the control of an anthropomorphic manipulator with two arms and a stereo-vision head, totaling 18 joints. The results show that a good part of the system performance can be recovered.
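The delay-compensation idea above can be sketched as model-based prediction: when a sensor packet is late, the controller propagates the last delivered state through the plant model for every control input applied in the meantime. The model below is an assumed double integrator with dt = 0.1 s, purely illustrative, not the manipulator's 18-joint dynamics.

```python
# Sketch of the delay-compensation technique: propagate the last delivered
# state through an ASSUMED plant model (double integrator: position,
# velocity) for each control input applied while the packet was in transit.

DT = 0.1  # assumed sample time in seconds

def estimate_state(last_state, inputs_since_last):
    """x[k+1] = A x[k] + B u[k] with A = [[1, DT], [0, 1]] and
    B = [[DT**2 / 2], [DT]] (double-integrator discretization)."""
    pos, vel = last_state
    for u in inputs_since_last:
        pos, vel = pos + DT * vel + 0.5 * DT ** 2 * u, vel + DT * u
    return pos, vel

# two control samples were applied while the sensor packet was delayed
x_hat = estimate_state((1.0, 0.0), [0.2, 0.2])
```

The controller would use `x_hat` in place of the missing measurement, replacing it with the true value once the late packet finally arrives.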

Relevance: 100.00%

Publisher:

Abstract:

In the last decade mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution to the full IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance in the development of the information society of the near future. In particular, a research topic of great relevance in telecommunications nowadays is the design and implementation of 4th-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the type of multimedia applications to be available in the near future. The approach followed in the design and implementation of current-generation mobile wireless networks (2G and 3G) has been the stratification of the architecture into a communication protocol model composed of a set of layers, each encompassing some set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not utilize information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented.
MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information regarding the state of radio resources, from lower to upper layers, is of fundamental importance in achieving the levels of QoS expected by those multimedia applications. In order to match application requirements and the constraints of the mobile radio channel, in the last few years researchers have proposed a new paradigm for the layered communications architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands the implementation of efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided from lower as well as upper layers of the protocol stack, fully compliant with the cross-layer design paradigm. Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity by taking into account the limitations imposed by the mobile radio channel while complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, seems to comply with the specifications of 4G mobile networks.
Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. Also, the connection-oriented approach of its medium access layer is fully compliant with the quality of service demands of such applications. Therefore, Mobile WiMAX seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the set of available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, according to the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
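The cross-layer scheduling principle described above, ranking users by a score that mixes a physical-layer metric with an upper-layer QoS input, can be illustrated with a toy rule. The score formula, field names and numbers below are assumptions of this sketch, not the thesis's algorithms.

```python
# Toy cross-layer scheduler (NOT the thesis algorithm): the score for each
# user combines a physical-layer input (instantaneous SNR) with an
# upper-layer QoS input (head-of-line delay relative to the delay budget).

def schedule(users):
    """users: list of dicts with hypothetical fields 'id', 'snr' (dB),
    'hol_delay' (ms) and 'delay_budget' (ms). Returns the id of the user
    granted the next resource unit."""
    def score(u):
        urgency = u["hol_delay"] / u["delay_budget"]  # QoS pressure from above
        return u["snr"] * (1.0 + urgency)             # channel-aware weighting
    return max(users, key=score)["id"]

users = [
    {"id": "voip",  "snr": 8.0,  "hol_delay": 45.0, "delay_budget": 50.0},
    {"id": "video", "snr": 15.0, "hol_delay": 10.0, "delay_budget": 100.0},
]
winner = schedule(users)
```

Here the video user wins (score 16.5 vs 15.2) because its channel is much better; as the VoIP packet's delay approaches its budget, the urgency factor would eventually override the channel advantage, which is the essence of combining lower- and upper-layer inputs.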

Relevance: 100.00%

Publisher:

Abstract:

This manuscript describes the development and validation of an ultra-fast, efficient, high-throughput analytical method based on ultra-high performance liquid chromatography (UHPLC) equipped with a photodiode array (PDA) detection system, for the simultaneous analysis of fifteen bioactive metabolites in wines: gallic acid, protocatechuic acid, (−)-catechin, gentisic acid, (−)-epicatechin, syringic acid, p-coumaric acid, ferulic acid, m-coumaric acid, rutin, trans-resveratrol, myricetin, quercetin, cinnamic acid and kaempferol. A 50-mm column packed with 1.7-μm particles operating at elevated pressure (UHPLC strategy) was selected to attain ultra-fast analysis and highly efficient separations. In order to reduce the complexity of the wine extract and improve the recovery efficiency, a reverse-phase solid-phase extraction (SPE) procedure was performed prior to UHPLC–PDA analysis, using as sorbent a new macroporous copolymer made from a balanced ratio of two monomers, the lipophilic divinylbenzene and the hydrophilic N-vinylpyrrolidone (Oasis™ HLB). The calibration curves of the bioactive metabolites showed good linearity within the established range. Limits of detection (LOD) and quantification (LOQ) ranged from 0.006 μg mL−1 to 0.58 μg mL−1, and from 0.019 μg mL−1 to 1.94 μg mL−1, for gallic and gentisic acids, respectively. The average recoveries ± SD for the three concentration levels tested (n = 9) in red and white wines were, respectively, 89 ± 3% and 90 ± 2%. The repeatability, expressed as relative standard deviation (RSD), was below 10% for all the metabolites assayed. The validated method was then applied to red and white wines from different geographical origins (Azores, Canary and Madeira Islands). The most abundant component in the analysed red wines was (−)-epicatechin, followed by (−)-catechin and rutin, whereas in white wines syringic and p-coumaric acids were found to be the major phenolic metabolites.
The method was completely validated, providing a sensitive analysis for the detection of bioactive phenolic metabolites and showing satisfactory data for all the parameters tested. Moreover, it proved to be an ultra-fast approach, allowing the separation of the fifteen bioactive metabolites investigated with high resolving power within 5 min.
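LOD and LOQ figures like those reported are commonly derived from the residual standard deviation of a linear calibration curve (ICH-style: LOD = 3.3·s/slope, LOQ = 10·s/slope). The sketch below uses invented calibration data, not the paper's measurements.

```python
import statistics

# Hedged sketch: estimate LOD/LOQ from a linear calibration curve using the
# residual standard deviation (ICH convention). Concentrations and peak
# areas below are invented for illustration.

def calibration_limits(conc, signal):
    """Least-squares fit signal = intercept + slope*conc, then
    LOD = 3.3*s/slope and LOQ = 10*s/slope, s = residual SD."""
    n = len(conc)
    mx, my = statistics.fmean(conc), statistics.fmean(signal)
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
             / sum((x - mx) ** 2 for x in conc))
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x) for x, y in zip(conc, signal)]
    s = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
    return 3.3 * s / slope, 10 * s / slope

conc   = [0.1, 0.5, 1.0, 2.0, 5.0]          # µg/mL (hypothetical standards)
signal = [10.2, 50.8, 99.5, 201.0, 499.1]   # peak area (hypothetical)
lod, loq = calibration_limits(conc, signal)
```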

Relevance: 100.00%

Publisher:

Abstract:

Shrimp farming is one of the activities that contribute most to the growth of global aquaculture. However, this business has suffered significant economic losses due to the onset of viral diseases such as Infectious Myonecrosis (IMN). IMN is already widespread throughout Northeastern Brazil and affects other countries such as Indonesia, Thailand and China. The main symptom of the disease is myonecrosis, which consists of necrosis of the striated muscles of the abdomen and cephalothorax of the shrimp. IMN is caused by the infectious myonecrosis virus (IMNV), a non-enveloped virus with protrusions along its capsid. The viral genome consists of a single molecule of double-stranded RNA and has two Open Reading Frames (ORFs). ORF1 encodes the major capsid protein (MCP) and a potential RNA-binding protein (RBP). ORF2 encodes a probable RNA-dependent RNA polymerase (RdRp) and places IMNV in the Totiviridae family. Thus, the objective of this research was to study the complete IMNV genome and its encoded proteins in order to develop a system to differentiate virus isolates based on the presence of polymorphisms. The phylogenetic relationship among totiviruses was investigated and revealed a new group for IMNV within the Totiviridae family. Two new genomes were sequenced, analyzed and compared to two other genomes already deposited in GenBank. The new genomes were more similar to each other than to those already described. Conserved and variable regions of the genome were identified through similarity graphs and alignments using the four IMNV sequences. This analysis allowed the mapping of polymorphic sites and revealed that the most variable region of the genome lies in the first half of ORF1, coinciding with the regions that possibly encode the viral protrusion, while the most stable regions of the genome were found in conserved domains of proteins that interact with RNA.
Moreover, secondary structures were predicted for all proteins using various software tools, and protein structural models were calculated using threading and ab initio modeling approaches. From these analyses it was possible to observe that the IMNV proteins have motifs and shapes similar to proteins of other totiviruses, and new possible protein functions have been proposed. The genome and protein study was essential for the development of a PCR-based detection system able to discriminate the four IMNV isolates based on the presence of polymorphic sites.
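The mapping of variable genome regions through similarity graphs can be sketched as per-window identity between aligned sequences: windows scoring below 100% identity flag candidate polymorphic sites. The sequences below are toy strings, not IMNV data.

```python
# Illustrative sketch of similarity-graph analysis: compute per-window
# identity between two equal-length ALIGNED sequences and flag variable
# windows. Toy sequences, invented for this sketch.

def window_identity(seq_a, seq_b, window=4):
    """Return (start, fraction identical) for each non-overlapping window."""
    scores = []
    for i in range(0, len(seq_a) - window + 1, window):
        a, b = seq_a[i:i + window], seq_b[i:i + window]
        ident = sum(x == y for x, y in zip(a, b)) / window
        scores.append((i, ident))
    return scores

ref = "ATGGCCTTAGGCCTAA"
iso = "ATGGACTAAGGCCTAA"   # differs from ref at positions 4 and 7
variable = [start for start, s in window_identity(ref, iso) if s < 1.0]
```

With real data one would plot the per-window scores along the genome; runs of low-identity windows correspond to the variable regions the abstract describes, and the polymorphic positions inside them are candidate targets for isolate-discriminating PCR primers.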

Relevance: 100.00%

Publisher:

Abstract:

A new procedure was developed in this study, based on a system equipped with a cellulose membrane and a tetraethylenepentamine hexaacetate chelator (MD-TEPHA), for the in situ characterization of the lability of metal species in aquatic systems. To this end, the MD-TEPHA system was prepared by adding the TEPHA chelator to cellulose bags pre-purified with 1.0 mol L-1 HCl and NaOH solutions. After the MD-TEPHA system was sealed, it was examined in the laboratory to evaluate the influence of complexation time (0-24 h), pH (3.0, 4.0, 5.0, 6.0 and 7.0), metal ions (Cu, Cd, Fe, Mn and Ni) and concentration of organic matter (15, 30 and 60 mg L-1) on the relative lability of metal species to the TEPHA chelator. The results showed that Fe and Cu were complexed more slowly by the TEPHA chelator in the MD-TEPHA system than were Cd, Ni and Mn at all pH values tested. It was also found that pH strongly influences the process of metal complexation by the MD-TEPHA system. At all pH levels, Cd, Mn and Ni showed greater complexation with the TEPHA chelator (recoveries of about 75-95%) than did Cu and Fe. Time also affects the lability of metal species complexed by aquatic humic substances (AHS): Cd, Ni and Mn showed faster kinetics, reaching equilibrium after about 100 min, whereas Cu and Fe approached equilibrium only after 400 min. Increasing the AHS concentration decreases the lability of metal species by shifting the equilibrium toward AHS-metal complexes. Our results indicate that the system under study offers an interesting alternative that can be applied in situ to differentiate labile and inert metal species in aquatic systems.
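The different equilibration times (about 100 min for Cd/Ni/Mn versus about 400 min for Cu/Fe) are consistent with simple first-order kinetics, recovery(t) = R_eq·(1 − e^(−kt)), where the time to reach 95% of equilibrium is ln(20)/k. The rate constants below are assumed for illustration, not fitted to the paper's data.

```python
import math

# Sketch under an ASSUMED first-order complexation model:
#   recovery(t) = R_eq * (1 - exp(-k * t))
# The 95%-of-equilibrium time is ln(20)/k, showing how different rate
# constants yield the ~100 min vs ~400 min equilibration times reported.

def t95(k_per_min):
    """Time (min) to reach 95% of the equilibrium recovery."""
    return math.log(20) / k_per_min

k_fast = 0.030   # hypothetical rate constant for Cd/Ni/Mn (1/min)
k_slow = 0.0075  # hypothetical rate constant for Cu/Fe (1/min)
t_fast, t_slow = t95(k_fast), t95(k_slow)
```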

Relevance: 100.00%

Publisher:

Abstract:

Fuzzy logic admits infinitely many intermediate truth values between false and true. Based on this principle, this work developed a fuzzy rule-based system that indicates the body mass index of ruminant animals, with the aim of determining the best moment for slaughter. The fuzzy system takes the variables mass and height as inputs, and outputs a new body mass index, called the Fuzzy Body Mass Index (Fuzzy BMI), which can serve as a system for detecting the moment of slaughter of cattle, comparing animals with each other through the linguistic variables "Very Low", "Low", "Medium", "High" and "Very High". To demonstrate and apply this fuzzy system, 147 Nelore cows were analyzed, determining the Fuzzy BMI values for each animal and indicating the body mass condition of the whole herd. The system was validated through a statistical analysis using Pearson's correlation coefficient, which reached 0.923, representing a high positive correlation and indicating that the proposed method is adequate. Thus, the method makes it possible to evaluate the herd by comparing each animal with its peers in the group, providing a quantitative decision-making method for the rancher. It can also be concluded that this work established a computational method based on fuzzy logic capable of imitating part of human reasoning and interpreting the body mass index of any type of bovine breed in any region of the country.
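The fuzzy classification step can be sketched with triangular membership functions over a conventional mass/height² index, with the winning linguistic label being the one with the highest membership degree. The breakpoints and the index formula below are assumptions for illustration, not the paper's rule base.

```python
# Minimal fuzzy-labelling sketch in the spirit of the system described.
# Membership breakpoints and the BMI formula (mass / height**2) are
# ASSUMPTIONS of this sketch, not the paper's rules.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_bmi(mass_kg, height_m):
    bmi = mass_kg / height_m ** 2
    degrees = {
        "Very Low":  tri(bmi, 150, 200, 250),
        "Low":       tri(bmi, 200, 250, 300),
        "Medium":    tri(bmi, 250, 300, 350),
        "High":      tri(bmi, 300, 350, 400),
        "Very High": tri(bmi, 350, 400, 450),
    }
    return max(degrees, key=degrees.get)  # label with highest membership

label = fuzzy_bmi(450, 1.25)  # hypothetical 450 kg cow, 1.25 m tall
```

Because neighbouring memberships overlap, an animal near a breakpoint belongs partially to two labels; taking the maximum degree gives the crisp label used to rank animals within the herd.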

Relevance: 100.00%

Publisher:

Abstract:

This dissertation proposes alternative models to allow the interconnection of the data communication networks of COSERN (Companhia Energética do Rio Grande do Norte). These networks comprise the corporate data network, based on the TCP/IP architecture, and the automation system linking remote electric energy distribution substations to the main Operation Centre, based on digital radio links and using the IEC 60870-5-101 protocols. The envisaged interconnection aims to provide automation data originating from substations with a contingency route to the Operation Centre during failure or maintenance of the digital radio links. Among the presented models, the one chosen for development consists of a computational prototype based on a standard personal computer, running the LINUX operating system and an application, developed in C, which functions as a gateway between the protocols of the TCP/IP stack and the IEC 60870-5-101 suite. The analysis, implementation and functionality and performance tests of this model are described. During the test phase, the delay introduced by the TCP/IP network when transporting automation data was verified, in order to guarantee that it was consistent with the time periods present on the automation network. In addition, extra modules are suggested for the prototype to handle other issues, such as security and prioritization of the automation system data while it traverses the TCP/IP network. Finally, a study has been done aiming to integrate the two networks in a more complete way, using the IP platform as a convergence solution for the communication subsystem of a unified network, as the most recent market trends for supervisory and other automation systems indicate.
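The gateway's core job, carrying serial-link IEC 60870-5-101 frames over a TCP byte stream and recovering their boundaries at the peer, can be sketched as length-prefix framing. The 2-byte length prefix is an assumption of this sketch, not the dissertation's wire format (and the dissertation's gateway is written in C, not Python).

```python
import struct

# Sketch of frame encapsulation for a serial-to-TCP gateway. TCP is a byte
# stream with no message boundaries, so each serial frame is prefixed with
# its length; the peer gateway uses the prefix to re-segment the stream.
# The 2-byte big-endian prefix is an ASSUMPTION of this sketch.

def wrap(frame: bytes) -> bytes:
    """Prefix a serial frame with its length for transport over TCP."""
    return struct.pack(">H", len(frame)) + frame

def unwrap(stream: bytes):
    """Split a received TCP byte stream back into the original frames."""
    frames, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from(">H", stream, i)
        frames.append(stream[i + 2:i + 2 + n])
        i += 2 + n
    return frames

# two example byte sequences standing in for IEC 60870-5-101 link frames
f1 = bytes([0x10, 0x49, 0x01, 0x4A, 0x16])
f2 = bytes([0x10, 0x5B, 0x01, 0x5C, 0x16])
recovered = unwrap(wrap(f1) + wrap(f2))
```

A real gateway would additionally pace deliveries to respect the link-layer timing constraints the dissertation measures, since the automation protocol assumes bounded response times.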

Relevance: 100.00%

Publisher:

Abstract:

The pumping of fluids through pipelines is the most economical and safe form of transporting fluids. That explains why in 1999 there were about 30,000 km [7] of pipelines of several diameters in Europe, transporting millions of cubic meters of crude oil and refined products, belonging to CONCAWE (the European oil companies' association for health, environment and safety, which joins several petroleum companies). In Brazil there are about 18,000 km of pipelines transporting millions of cubic meters of liquids and gases. In 1999, nine accidents were reported to CONCAWE. Among those accidents, one caused a fatality. The oil loss was 171 m3, equivalent to 0.2 parts per million of the total transported volume. Even so, the costs involved in an accident can be high. An accident of great proportions can bring loss of human lives, severe environmental damage, loss of drained product, loss of profit, damage to the company's image, and high recovery costs. Accordingly, and in some cases due to legal requirements, companies are increasingly investing in pipeline leak detection systems based on computer algorithms that operate in real time, seeking to further minimize the drained volumes, which decreases both the environmental impact and the costs. In general, all software-based systems present some type of false alarm, and a trade-off exists between the sensitivity of the system and the number of false alarms. This work aims to review the existing methods and to concentrate on the analysis of a specific system, namely the system based on hydraulic noise, Pressure Point Analysis (PPA).
We show the most important aspects that must be considered in the implementation of a Leak Detection System (LDS), from the initial risk analysis phase through the design bases, design, and choice of the field instrumentation necessary for several LDS, to implementation and tests. We also analyze events (noises) originating from the flow system that can be sources of false alarms, and present a computer algorithm that filters out those noises automatically.
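The false-alarm filtering idea can be sketched as a persistence check: a pressure drop raises an alarm only if it lasts several consecutive samples, so short hydraulic noise bursts (pump starts, valve moves) are ignored. Thresholds and pressure data below are invented for illustration, not the PPA algorithm itself.

```python
# Illustrative persistence filter in the spirit of PPA-style leak detection:
# a single noisy sample below the threshold does not trip the alarm; only a
# sustained drop does. Baseline, thresholds and data are invented.

def leak_alarm(pressures, baseline, drop=2.0, persist=3):
    """Return the sample index at which an alarm is raised, or None.
    A sample 'votes' for a leak when it sits more than `drop` below
    `baseline`; `persist` consecutive votes trigger the alarm."""
    run = 0
    for i, p in enumerate(pressures):
        run = run + 1 if baseline - p > drop else 0
        if run >= persist:
            return i
    return None

noise_spike = [50.0, 47.5, 50.1, 49.8, 50.0, 49.9]  # transient burst
real_leak   = [50.0, 49.9, 47.6, 47.4, 47.3, 47.2]  # sustained drop
```

Raising `persist` suppresses more noise at the price of slower detection, which is exactly the sensitivity-versus-false-alarm trade-off the abstract mentions.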

Relevance: 100.00%

Publisher:

Abstract:

This work proposes the development of an intelligent system for the analysis of digital mammograms, capable of detecting and classifying masses and microcalcifications. The digital mammograms are pre-processed through digital image processing techniques in order to adapt the image to the system for detection and automatic classification of the calcifications existing in the breast. The model adopted for the detection and classification of the mammograms uses Kohonen's neural network through the Self-Organizing Map (SOM) algorithm. The K-means vector quantization algorithm is also used for the same purpose as the SOM. An analysis of the performance of the two algorithms in the automatic classification of digital mammograms is developed. The developed system will aid the radiologist in the diagnosis and follow-up of the development of abnormalities.
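The K-means step used alongside the SOM can be sketched on toy 2-D feature vectors; feature extraction from the mammograms is outside this fragment. The deterministic seeding with the two extreme points is a simplification for reproducibility, not the paper's initialization.

```python
# Two-cluster K-means sketch on toy 2-D feature points standing in for
# extracted mammogram features. Deterministic seeding (lexicographic
# extreme points) keeps the sketch reproducible; it assumes both groups
# stay non-empty for the given data.

def kmeans2(points, iters=10):
    centers = [min(points), max(points)]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            groups[d.index(min(d))].append(p)  # assign to nearest center
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) for g in groups]
    return sorted(centers)

pts = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # toy "class A" features
       (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]   # toy "class B" features
centers = kmeans2(pts)
```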

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

This work presents a methodology to analyze the transient stability (first oscillation) of electric energy systems, using a neural network based on the ART (adaptive resonance theory) architecture, named fuzzy ART-ARTMAP neural network, for real-time applications. The security margin is used as a stability analysis criterion, considering three-phase short-circuit faults with a transmission line outage. The neural network operation consists of two fundamental phases: training and analysis. The training phase requires a great amount of processing, while the analysis phase is carried out almost without computational effort. This is, therefore, the main reason to use neural networks for solving complex problems that need fast solutions, such as real-time applications. ART neural networks have as fundamental characteristics plasticity and stability, which are essential qualities for the training execution and for an efficient analysis. The fuzzy ART-ARTMAP neural network is proposed seeking superior performance, in terms of precision and speed, compared to the conventional ARTMAP, and even more so compared to neural networks trained with the backpropagation algorithm, which is a benchmark in the neural network area.
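The stability/plasticity property of ART networks rests on vigilance-gated matching: an input resonates with an existing category prototype only if the match exceeds a vigilance threshold; otherwise a new category is created, so old categories are never overwritten by dissimilar inputs. The fragment below is a didactic unsupervised sketch of that principle, not the paper's fuzzy ART-ARTMAP.

```python
# Didactic sketch of ART-style vigilance matching (unsupervised, greatly
# simplified): fuzzy-AND match against each prototype; resonate and learn
# if the best match clears the vigilance threshold, else add a category.

def art_classify(patterns, vigilance=0.8):
    """Return a category label per input vector (values in [0, 1])."""
    categories, labels = [], []
    for p in patterns:
        scores = [sum(min(a, b) for a, b in zip(p, c)) / sum(p)
                  for c in categories]            # |p ^ proto| / |p|
        if scores and max(scores) >= vigilance:
            j = scores.index(max(scores))
            # resonance: fast learning moves the prototype toward the input
            categories[j] = [min(a, b) for a, b in zip(p, categories[j])]
        else:
            categories.append(list(p))            # plasticity: new category
            j = len(categories) - 1
        labels.append(j)
    return labels

labels = art_classify([[1.0, 0.0, 0.9],   # joins category 0
                       [0.9, 0.1, 1.0],   # resonates with category 0
                       [0.0, 1.0, 0.1]])  # too different: new category 1
```

Raising the vigilance makes categories finer (more of them, each tighter); in the paper's supervised ARTMAP setting a second module would additionally check that the resonating category predicts the correct stability class.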

Relevance: 100.00%

Publisher:

Abstract:

This work presents a neural network based on the ART (adaptive resonance theory) architecture, named fuzzy ART&ARTMAP neural network, applied to the electric load forecasting problem. Neural networks based on the ART architecture have two fundamental characteristics that are extremely important for network performance (stability and plasticity), which allow the implementation of continuous training. The fuzzy ART&ARTMAP neural network aims to reduce the imprecision of the forecasting results through a mechanism that separates analog and binary data, processing them separately. This yields a reduction in processing time and improved quality of results compared to the back-propagation neural network, and better results than classical forecasting techniques (the ARIMA method of Box and Jenkins). Once training is finished, the fuzzy ART&ARTMAP neural network is capable of forecasting electrical loads 24 h in advance. To validate the methodology, data from a Brazilian electric company are used.
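The separation of analog and binary data mentioned above can be sketched as input preprocessing: analog inputs are normalized and complement-coded (the usual fuzzy ART input coding), while binary flags pass through unchanged. Field names and scaling ranges below are assumptions of this sketch, not the paper's feature set.

```python
# Sketch of the analog/binary input separation: analog measurements are
# normalized to [0, 1] and complement-coded for the fuzzy module; binary
# calendar flags are passed through untouched. Names/ranges are ASSUMED.

def split_and_code(sample, analog_keys, lo, hi):
    """sample: dict of input values. Returns (analog_coded, binary)."""
    analog, binary = [], []
    for k, v in sample.items():
        if k in analog_keys:
            x = (v - lo[k]) / (hi[k] - lo[k])   # normalize to [0, 1]
            analog += [x, 1.0 - x]              # complement coding [x, 1-x]
        else:
            binary.append(v)                    # binary flags unchanged
    return analog, binary

sample = {"load_mw": 750.0, "temp_c": 25.0, "is_weekend": 0, "is_peak_hour": 1}
analog, binary = split_and_code(
    sample, {"load_mw", "temp_c"},
    lo={"load_mw": 0.0, "temp_c": -10.0},
    hi={"load_mw": 1000.0, "temp_c": 40.0})
```

Complement coding keeps the city-block norm of every analog input pair constant, which is what lets fuzzy ART categories stay stable under continuous training.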

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

The increasing capacity to integrate transistors has made it possible to develop complete systems, with several components, on a single chip, called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as with buses, or be an ad hoc solution, as with bus hierarchies. Thus, the ideal interconnection subsystem for SoCs is the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other projects. However, NoCs can raise the design complexity, the chip area and the dissipated power. Thus, it is necessary either to change how they are used or to change the development paradigm. Accordingly, a NoC-based system is proposed in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds further destinations until all instructions have been executed. The objective, therefore, is to study the feasibility of developing such a system, named the IPNoSys system. In this study, a cycle-accurate simulation tool was developed in SystemC to simulate the system executing applications, which were implemented in a packet description language also developed for this study. Through the simulation tool, several results were obtained to evaluate the system performance. The methodology used to describe an application consists of transforming the high-level application into a data-flow graph that becomes one or more packets. This methodology was used in three applications: a counter, a 2-D DCT and a floating-point addition. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison with the STORM platform.
Finally, the floating-point addition aimed to evaluate the efficiency of the software routine that executes an unimplemented hardware instruction. The simulation results confirm the feasibility of developing the IPNoSys system. They showed that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and also that the execution time of IPNoSys is shorter than that of the STORM platform.
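The central IPNoSys idea, packets that carry their own instructions and are executed hop by hop in the routers, can be sketched as follows. The instruction set, packet layout and router path are invented for illustration; the real system is a cycle-accurate SystemC model, not a Python loop.

```python
# Didactic sketch of in-network execution: a packet carries an accumulator
# and a program; each router on the path consumes one instruction before
# forwarding, so the result is computed "in the network" without a
# conventional processor. Opcodes and packet layout are INVENTED here.

OPS = {"ADD": lambda a, b: a + b,
       "MUL": lambda a, b: a * b,
       "SUB": lambda a, b: a - b}

def route_and_execute(packet, path):
    """packet: {'acc': value, 'program': [(op, operand), ...]}.
    Each router executes the packet's next instruction; if the program
    ends before the destination, remaining hops just forward the result."""
    for router in path:
        if not packet["program"]:
            break
        op, operand = packet["program"].pop(0)
        packet["acc"] = OPS[op](packet["acc"], operand)
    return packet["acc"]

packet = {"acc": 2, "program": [("ADD", 3), ("MUL", 4), ("SUB", 5)]}
result = route_and_execute(packet, path=["r00", "r01", "r11", "r21"])
```

When the path is shorter than the program, a scheme like the spiral complement algorithm described above would pick a further destination so the leftover instructions can keep executing.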