962 results for FAULT TOLERANCE


Relevance:

60.00%

Publisher:

Abstract:

The area of Intrusion Detection, despite being heavily researched, does not address some real-world problems such as attack levels, network size and complexity, fault tolerance, authentication and privacy, interoperability and standardization. A research effort at the Informatics Institute of UFRGS, more specifically in the Security Group (GSEG), aims to develop a Distributed Intrusion Detection System with fault-tolerance characteristics. This project, named Asgaard, is conceived as a system whose goal is not restricted to being yet another Intrusion Detection tool, but rather a platform onto which new modules and techniques can be aggregated, representing an advance over other Detection Systems currently under development. A topic not yet addressed in this project is the detection of sniffers on the network, which is a way of preventing an attack from proceeding to other stations or interconnected networks, since an intruder normally installs a sniffer after a successful attack. This work discusses sniffer detection techniques and their scenarios, and evaluates the use of these techniques on a local network. The known techniques are tested in an environment with different operating systems, such as Linux and Windows, mapping the results regarding their effectiveness under diverse conditions.
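The abstract does not list the specific detection techniques evaluated; one classic technique from this literature probes whether a host's NIC is in promiscuous mode by sending an ARP request to a bogus, non-broadcast MAC address, since a sniffing host may answer a frame that a normally configured interface would drop. A minimal sketch using the scapy library (target address and interface are placeholders, and responses differ between operating systems, which is exactly what such tests map out):

# Hypothetical sketch of the "fake-MAC ARP probe" sniffer-detection technique.
# Requires scapy (pip install scapy) and root privileges; run on a local LAN only.
from scapy.all import ARP, Ether, srp

def probe_promiscuous(target_ip: str, iface: str = "eth0", timeout: float = 2.0) -> bool:
    """Return True if target_ip answers an ARP request addressed to a bogus MAC,
    which suggests its NIC is in promiscuous mode (i.e., possibly running a sniffer)."""
    # ff:ff:ff:ff:ff:fe is not the broadcast address, so a non-promiscuous NIC
    # should silently discard the frame instead of handing it to the ARP layer.
    probe = Ether(dst="ff:ff:ff:ff:ff:fe") / ARP(pdst=target_ip)
    answered, _ = srp(probe, iface=iface, timeout=timeout, verbose=False)
    return len(answered) > 0

if __name__ == "__main__":
    suspect = "192.168.0.42"  # placeholder address
    print(f"{suspect} looks promiscuous: {probe_promiscuous(suspect)}")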

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications and, as a result, are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA, and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case study circuit. Although TMR has shown high reliability, this technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results in fault coverage, area and performance between the discussed techniques.
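To make the TMR idea concrete, the sketch below is a toy software model (not the thesis' netlist-level implementation) of triple modular redundancy with a majority voter around a counter-based case-study circuit, plus a fault-injection loop that flips one bit in one replica per cycle and checks that the voter masks it:

import random

def majority_voter(a: int, b: int, c: int) -> int:
    """Bitwise majority of three replica outputs: a fault in any single replica is masked."""
    return (a & b) | (a & c) | (b & c)

def counter_replica(state: int, width: int = 8) -> int:
    """One replica of the case-study circuit: a simple modulo-2^width counter."""
    return (state + 1) % (1 << width)

def run_with_fault_injection(cycles: int = 1000, width: int = 8) -> None:
    replicas = [0, 0, 0]
    for cycle in range(cycles):
        replicas = [counter_replica(s, width) for s in replicas]
        golden = replicas[0]                      # expected fault-free value
        # Inject a single-event upset: flip one random bit in one random replica.
        victim, bit = random.randrange(3), random.randrange(width)
        replicas[victim] ^= (1 << bit)
        voted = majority_voter(*replicas)
        assert voted == golden, f"voter failed to mask the fault at cycle {cycle}"
        replicas[victim] = golden                 # assume the upset is scrubbed/repaired
    print(f"{cycles} injected single faults, all masked by the TMR voter")

if __name__ == "__main__":
    run_with_fault_injection()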

Relevance:

60.00%

Publisher:

Abstract:

Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This leads to the need for multicast communication, but unfortunately group communication support based on IP multicast has not been widely adopted due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Thus, server-centric architectures for membership management have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. Given this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes into account the actual network topology in the assembly process of the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared and bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways. First, through a simulator developed in this work, where we evaluated the quality of the distribution tree using metrics such as out-degree and path length. Second, real-life scenarios were built in the ns-3 network simulator, where we evaluated the protocol's performance using metrics such as stress, stretch, time to first packet and group reconfiguration time.
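The abstract does not define how the IPXY metric is computed, so the sketch below only illustrates the general idea of location-aware membership: a joining node attaches to the candidate parent that is closest under some locally measurable distance and still has spare out-degree (the Euclidean distance and the out-degree limit are placeholders, not LAALM's actual choices):

from dataclasses import dataclass, field

MAX_OUT_DEGREE = 4  # assumed fan-out limit for the distribution tree

@dataclass
class Node:
    name: str
    coords: tuple  # stand-in for locally available topology information
    children: list = field(default_factory=list)

def distance(a: Node, b: Node) -> float:
    """Placeholder for the IPXY metric: here, plain Euclidean distance on coordinates."""
    return sum((x - y) ** 2 for x, y in zip(a.coords, b.coords)) ** 0.5

def join(tree_nodes: list, newcomer: Node) -> Node:
    """Attach newcomer under the closest node that still has spare out-degree."""
    candidates = [n for n in tree_nodes if len(n.children) < MAX_OUT_DEGREE]
    parent = min(candidates, key=lambda n: distance(n, newcomer))
    parent.children.append(newcomer)
    return parent

# Usage: the root joins first, then each member attaches to its closest eligible parent.
root = Node("root", (0, 0))
members = [root]
for i, pos in enumerate([(1, 1), (5, 5), (1, 2)]):
    n = Node(f"n{i}", pos)
    print(f"{n.name} joins under {join(members, n).name}")
    members.append(n)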

Relevance:

60.00%

Publisher:

Abstract:

There are some approaches that take advantage of unused computational resources in Internet nodes - users' machines. In recent years, peer-to-peer (P2P) networks have gained momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present some problems, such as node overhead due to message routing, a great number of node reconfigurations when the network topology changes, routing traffic inside a specific network even when the traffic is not directed to a machine of that network, and the lack of a relationship between the proximity of nodes in the P2P network and the proximity of these nodes in the IP network. Although some architectures use information about node distance in the IP network, they use methods that require dynamic information. In this work we propose a P2P architecture to address the aforementioned problems. It is composed of three parts. The first part is a basic P2P architecture, called SGrid, which maintains a relationship between nodes in the P2P network and their position in the IP network. It assigns adjacent key regions to nodes of the same organization. The second part is a protocol called NATal (Routing and NAT application layer) that extends the basic architecture in order to remove from the nodes the responsibility of routing messages. The third part is a special kind of node, called LSP (Lightweight Super-Peer), which is responsible for maintaining the P2P routing table. In addition, this work also presents a simulator that validates the architecture and a module of the NATal protocol to be used in Linux routers.
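As an illustration of the key-region idea (assuming, purely for the sketch, a flat 32-bit identifier space split evenly among organizations; SGrid's actual assignment may differ), nodes of the same organization receive adjacent slices of the key space, so lookups for local keys stay inside the organization's network:

KEY_SPACE_BITS = 32  # assumed size of the identifier space

def region_for_org(org_index: int, num_orgs: int) -> range:
    """Give each organization one contiguous slice of the key space."""
    size = (1 << KEY_SPACE_BITS) // num_orgs
    return range(org_index * size, (org_index + 1) * size)

def region_for_node(org_index: int, num_orgs: int, node_index: int, nodes_in_org: int) -> range:
    """Split the organization's slice into adjacent sub-regions, one per node."""
    org = region_for_org(org_index, num_orgs)
    size = len(org) // nodes_in_org
    start = org.start + node_index * size
    return range(start, start + size)

def responsible_node(key: int, num_orgs: int, nodes_per_org: int):
    """Map a key back to (organization, node) by inverting the contiguous layout."""
    org_size = (1 << KEY_SPACE_BITS) // num_orgs
    org = key // org_size
    node = (key - org * org_size) // (org_size // nodes_per_org)
    return org, node

# Keys that fall into an organization's region are handled by that organization's nodes,
# so lookups for "local" content never need to leave the local network.
print(responsible_node(0x1234ABCD, num_orgs=4, nodes_per_org=8))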

Relevance:

60.00%

Publisher:

Abstract:

Most monitoring of onshore Oil and Gas environments is currently based on wireless solutions. However, these solutions have a technological configuration that is out-of-date, mainly because analog radios and inefficient communication topologies are used. On the other hand, solutions based on digital radios can be more efficient with respect to energy consumption, security and fault tolerance. Thus, this paper evaluates whether Wireless Sensor Networks, a communication technology based on digital radios, are adequate for monitoring onshore Oil and Gas wells. The percentage of packets transmitted successfully, energy consumption and communication delay, under different routing techniques applied to a mesh topology, are used as metrics to validate the proposal through the NS-2 network simulation tool.
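As a small illustration of the first two evaluation metrics named above (not of the NS-2 scripts themselves), packet delivery ratio and average end-to-end delay can be computed from per-packet send/receive records as follows:

from dataclasses import dataclass
from typing import Optional

@dataclass
class PacketRecord:
    sent_at: float                # seconds
    received_at: Optional[float]  # None if the packet was lost

def delivery_ratio(records: list) -> float:
    """Percentage of packets transmitted successfully."""
    delivered = sum(1 for r in records if r.received_at is not None)
    return 100.0 * delivered / len(records)

def average_delay(records: list) -> float:
    """Mean end-to-end communication delay over delivered packets (seconds)."""
    delays = [r.received_at - r.sent_at for r in records if r.received_at is not None]
    return sum(delays) / len(delays)

# Toy trace: three packets, one lost.
trace = [PacketRecord(0.00, 0.12), PacketRecord(1.00, None), PacketRecord(2.00, 2.09)]
print(f"PDR = {delivery_ratio(trace):.1f}%, mean delay = {average_delay(trace)*1000:.0f} ms")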

Relevance:

60.00%

Publisher:

Abstract:

Previous works have studied the characteristics and peculiarities of P2P networks, especially information security aspects. Most works deal, in some way, with the sharing of resources and, in particular, the storage of files. This work complements previous studies and adds new definitions related to this kind of system. A system for the safe storage of files (SAS-P2P) was specified and built, based on P2P technology, using the JXTA platform. This system uses standard X.509 and PKCS #12 digital certificates, issued and managed by a public key infrastructure, which was also specified and developed based on P2P technology (PKIX-P2P). The information is stored in a specially prepared XML file, facilitating handling and interoperability among applications. The intention in developing the SAS-P2P system was to offer a complementary service for Giga Natal network users, through which the participants in this network can collaboratively build a shared storage area with important security features such as availability, confidentiality, authenticity and fault tolerance. Besides the specification, development of prototypes and testing of the SAS-P2P system, tests of the PKIX-P2P Manager module were also performed in order to determine its fault tolerance and the effective calculation of the reputation of the certification authorities participating in the system.
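The system itself is built on JXTA (Java), but the certificate format it relies on is standard; as a language-neutral illustration, a PKCS #12 bundle such as the ones issued by the PKIX-P2P infrastructure could be opened with Python's cryptography package as sketched below (file name and password are placeholders):

# Requires: pip install cryptography
from cryptography.hazmat.primitives.serialization import pkcs12

def load_identity(p12_path: str, password: bytes):
    """Extract the private key and X.509 certificate from a PKCS #12 bundle."""
    with open(p12_path, "rb") as fh:
        key, cert, additional_certs = pkcs12.load_key_and_certificates(fh.read(), password)
    return key, cert, additional_certs

key, cert, chain = load_identity("member.p12", b"changeit")  # placeholder file/password
print("Subject:", cert.subject.rfc4514_string())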

Relevance:

60.00%

Publisher:

Abstract:

Complex network analysis is a powerful tool in the research of complex systems like brain networks. This work aims to describe the topological changes in neural functional connectivity networks of the neocortex and hippocampus during slow-wave sleep (SWS) in animals submitted to exposure to a novel experience. Slow-wave sleep is an important sleep stage in which patterns of electrical activity from wakefulness reverberate, playing a fundamental role in memory consolidation. Despite its importance, there is a lack of studies characterizing the topological dynamics of functional connectivity networks during this sleep stage, and no studies describe the topological modifications that novel exposure induces in these networks. We have observed that several topological properties are modified after novel exposure and that this modification persists for a long time. A major part of these changes in topological properties induced by novel exposure is related to fault tolerance.
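The abstract does not say how fault tolerance of the functional networks is quantified; a common proxy in complex network analysis is robustness to node removal, i.e. how the size of the largest connected component shrinks as nodes fail. A small sketch with networkx, using a small-world graph as a stand-in for a functional connectivity network:

# Requires: pip install networkx
import random
import networkx as nx

def robustness_curve(g: nx.Graph, fraction_steps: int = 10, seed: int = 0) -> list:
    """Size of the giant component (as a fraction of all nodes) after removing
    increasing fractions of randomly chosen nodes -- a simple fault-tolerance proxy."""
    rng = random.Random(seed)
    nodes = list(g.nodes)
    curve = []
    for step in range(fraction_steps + 1):
        k = int(len(nodes) * step / fraction_steps)
        removed = set(rng.sample(nodes, k))
        h = g.subgraph(n for n in nodes if n not in removed)
        giant = max((len(c) for c in nx.connected_components(h)), default=0)
        curve.append(giant / len(nodes))
    return curve

# Toy functional-connectivity network: a small-world graph as a stand-in.
g = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)
print([round(x, 2) for x in robustness_curve(g)])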

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Physics - IGCE

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

60.00%

Publisher:

Abstract:

Building facilities have become important infrastructures for modern productive plants dedicated to services. In this context, the control systems of intelligent buildings have evolved, and their reliability has clearly improved. However, the occurrence of faults is inevitable in systems conceived, constructed and operated by humans. Thus, a practical alternative approach is very useful to reduce the consequences of faults. Yet only a few publications address intelligent building modeling processes that take into consideration the occurrence of faults and how to manage their consequences. In the light of the foregoing, a procedure is proposed for the modeling of intelligent building control systems, considering their functional specifications in normal operation and in the event of faults. The proposed procedure adopts the concepts of discrete event systems and holons, and explores Petri nets and their extensions so as to represent the structure and operation of control systems for intelligent buildings under normal and abnormal situations. (C) 2012 Elsevier B.V. All rights reserved.
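As a generic illustration of the Petri net formalism the procedure builds on (not of the proposed models themselves), the sketch below encodes places with token counts and transitions with input/output arcs, and fires enabled transitions; here a hypothetical "fault" transition moves a device into a degraded mode handled by a recovery transition:

from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    inputs: dict   # place -> tokens consumed
    outputs: dict  # place -> tokens produced

def enabled(marking: dict, t: Transition) -> bool:
    return all(marking.get(p, 0) >= n for p, n in t.inputs.items())

def fire(marking: dict, t: Transition) -> dict:
    """Return the new marking after firing t (assumes t is enabled)."""
    m = dict(marking)
    for p, n in t.inputs.items():
        m[p] -= n
    for p, n in t.outputs.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy building-automation fragment: a device is idle, starts operating,
# and a fault event moves it to a degraded mode handled by a recovery transition.
start   = Transition("start",   {"idle": 1},     {"running": 1})
fault   = Transition("fault",   {"running": 1},  {"degraded": 1})
recover = Transition("recover", {"degraded": 1}, {"idle": 1})

marking = {"idle": 1}
for t in (start, fault, recover):
    if enabled(marking, t):
        marking = fire(marking, t)
        print(f"fired {t.name}: {marking}")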

Relevance:

60.00%

Publisher:

Abstract:

Failure detection is at the core of most fault tolerance strategies, but it often depends on reliable communication. We present new algorithms for failure detectors that are appropriate as components of a fault tolerance system deployable in situations of adverse network conditions (such as loosely connected and administered computing grids). Our approach packs redundancy into heartbeat messages, thereby improving on the robustness of the traditional protocols. Results from experimental tests conducted in a simulated environment with adverse network conditions show significant improvement over existing solutions.
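The abstract does not spell out the protocol, so the sketch below only illustrates one plausible way of packing redundancy into heartbeats: each heartbeat piggybacks the sender's latest known heartbeat counters for the other processes it has heard from, so liveness information can still reach the monitor when direct messages are lost. This is an assumption-laden toy, not the paper's exact algorithm:

import time

class GossipHeartbeatDetector:
    """Failure detector where each heartbeat redundantly piggybacks the sender's latest
    known heartbeat counters for every process it has heard from, so liveness information
    can survive the loss of direct messages (a sketch, not the paper's protocol)."""

    def __init__(self, timeout: float = 5.0):
        self.timeout = timeout
        self.counters = {}     # process -> highest heartbeat counter seen (directly or relayed)
        self.last_update = {}  # process -> local time when that counter last increased

    def on_heartbeat(self, known_counters: dict) -> None:
        """known_counters: mapping process -> counter, as carried inside one heartbeat."""
        now = time.time()
        for proc, counter in known_counters.items():
            if counter > self.counters.get(proc, -1):
                self.counters[proc] = counter
                self.last_update[proc] = now

    def suspected(self, proc: str) -> bool:
        return (time.time() - self.last_update.get(proc, 0.0)) > self.timeout

# Direct heartbeats from "A" are lost, but "B"'s heartbeat relays A's counter,
# so the detector still refreshes A and avoids a false suspicion.
fd = GossipHeartbeatDetector(timeout=5.0)
fd.on_heartbeat({"B": 12, "A": 7})   # heartbeat received from B, piggybacking A's counter
print("A suspected:", fd.suspected("A"))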

Relevance:

60.00%

Publisher:

Abstract:

The peer-to-peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization may lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale may consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, consisting in having each peer periodically try to copy another peer which is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm. In fact, in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation formation point of view. The final step is to apply our results to more realistic scenarios. We put our effort into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and tackle free-riding and cheating nodes.
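The copy-the-better-peer rule at the heart of SLAC is simple enough to sketch: each node periodically compares its utility with that of a randomly chosen node and, if the other is doing better, copies its links and strategy, with a small mutation probability keeping exploration alive (parameter values and the toy utilities below are illustrative, not those used in the thesis):

import random

MUTATION_RATE = 0.01  # illustrative value

class Peer:
    def __init__(self, pid: int, rng: random.Random):
        self.pid = pid
        self.cooperate = rng.random() < 0.5  # strategy: cooperate or defect
        self.links = set()                   # neighbours in the overlay
        self.utility = 0.0                   # filled in by the application (e.g. PD payoffs)

def slac_step(peers: list, rng: random.Random) -> None:
    """One SLAC round: low-performing nodes copy the links and strategy of better ones."""
    for node in peers:
        other = rng.choice(peers)
        if other is node:
            continue
        if other.utility > node.utility:
            # "Move" next to the better node: drop old links, copy its neighbourhood.
            node.links = set(other.links) | {other.pid}
            node.cooperate = other.cooperate
        # Mutation keeps the population exploring new strategies and neighbourhoods.
        if rng.random() < MUTATION_RATE:
            node.cooperate = not node.cooperate

rng = random.Random(42)
peers = [Peer(i, rng) for i in range(100)]
for p in peers:                       # toy utilities: cooperators in this example earn more
    p.utility = 1.0 if p.cooperate else 0.5
slac_step(peers, rng)
print(sum(p.cooperate for p in peers), "cooperators after one round")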

Relevance:

60.00%

Publisher:

Abstract:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
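As a generic illustration of the packet-switching paradigm mentioned above (×pipes' routing is configurable and not described in this abstract), dimension-ordered XY routing, one of the most common routing algorithms for 2D mesh NoCs, can be sketched as follows:

def xy_route(src: tuple, dst: tuple) -> list:
    """Dimension-ordered (XY) routing on a 2D mesh NoC: a packet first travels along X
    until the destination column is reached, then along Y. Deterministic and deadlock-free
    on a mesh. Coordinates are (x, y) tile positions."""
    x, y = src
    hops = []
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops

# A packet from tile (0, 0) to tile (3, 2) crosses the X dimension first, then Y.
print(xy_route((0, 0), (3, 2)))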

Relevance:

60.00%

Publisher:

Abstract:

In recent years, Web applications have been taking on an increasingly important role in everyone's life. While until a few years ago we were used to working almost exclusively with "native" applications, executed entirely within our Personal Computer, today many users use their various devices almost exclusively to access Web applications. Web applications have made it possible to create the so-called social networks such as Facebook, which has been enormously successful all over the world and has revolutionized the way many people communicate. Moreover, many more traditional applications, such as office suites, have been transformed into Web applications such as Google Docs, which add, for example, the possibility for several people to work on the same document at the same time. Web applications are therefore taking on an increasingly important role, and consequently it is becoming essential to be able to create Web applications capable of competing with native applications, i.e. able to perform all the tasks that have traditionally been carried out by computers. In this thesis we therefore analyze the various ways in which Web applications can be improved, both in terms of the functions they can perform and in terms of scalability. Since modern Web applications increasingly need to perform computations concurrently and in a distributed fashion, we analyze a computational model that is particularly well suited to designing this kind of software: the Actor model. We then examine, as a case study of a framework for building advanced Web applications, the Play framework: it is based on the Akka actor programming platform and makes it possible to build extremely powerful and scalable Web applications in a simple way. Since modern Web applications must meet certain scalability and fault tolerance requirements from the outset, we address the problem of how to build Web applications designed to run on Cloud Computing platforms. In particular, we show how to publish a Web application based on the Play framework on the Heroku platform, a PaaS Cloud Computing service.
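To ground the discussion of the Actor model, the sketch below is a language-neutral toy (the thesis uses Akka and the Play framework; this is plain Python, not Akka code): each actor owns a private mailbox and processes one message at a time, so its state is never shared and actors can be distributed across threads or machines:

import queue
import threading

class Actor:
    """Toy actor: a private mailbox plus a thread that handles one message at a time,
    so the actor's state is only ever touched sequentially (no shared-memory locking)."""
    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message) -> None:
        self.mailbox.put(message)

    def _run(self) -> None:
        while True:
            message = self.mailbox.get()
            self.receive(message)

    def receive(self, message) -> None:
        raise NotImplementedError

class Counter(Actor):
    """Example actor that keeps a private counter and reports it on request."""
    def __init__(self):
        self.count = 0
        super().__init__()

    def receive(self, message) -> None:
        if message == "increment":
            self.count += 1
        elif isinstance(message, tuple) and message[0] == "get":
            message[1].put(self.count)   # reply through a channel supplied by the caller

counter = Counter()
for _ in range(3):
    counter.send("increment")
reply = queue.Queue()
counter.send(("get", reply))
print("count =", reply.get())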