886 results for Peer to peer networks


Relevance: 100.00%

Abstract:

With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted, and incomplete data streams gathered wirelessly under dynamically varying conditions. Yet existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To reduce the complexity of the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system, showing that segregating the data adaptation and prediction processes augments data reduction rates. The schemes presented in this study are evaluated using simulation and information-theoretic concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-converging adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation, and homeland security.
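
As a minimal illustration of the two ideas at the core of this abstract, geometric selection of sensor nodes of interest and validation of a reading against its spatio-temporal associates, here is a Python sketch; all names, the radius/time-window parameters, and the median-based validation rule are assumptions for illustration, not the dissertation's actual scheme:

```python
import math
from statistics import median

def nodes_of_interest(sensors, point, radius):
    # Geometric selection: keep only sensors near the application's
    # point of interest (a stand-in for mapping requirements onto a
    # geometric space).
    return [s for s in sensors if math.dist(s["pos"], point) <= radius]

def validate(reading, sensors, point, radius=10.0, window=5.0, tol=3.0):
    # Cross-validate one reading against spatio-temporally associated
    # readings; fall back to their median when it deviates too far.
    peers = [s["value"] for s in nodes_of_interest(sensors, point, radius)
             if abs(s["t"] - reading["t"]) <= window]
    if not peers:
        return reading["value"]            # nothing to validate against
    m = median(peers)
    return m if abs(reading["value"] - m) > tol else reading["value"]
```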

Relevance: 100.00%

Abstract:

This study examines the factors facilitating the transfer admission of students broadly classified as Black from a single community college into a selective engineering college. The work aims to further research on STEM preparation and performance for students of color, as well as scholarship on increasing access to four-year institutions from two-year schools. Factors illuminating Underrepresented Racial and Ethnic Minority (URM) student pathways through Science, Technology, Engineering, and Mathematics (STEM) degree programs have often been examined through large-scale quantitative studies. This qualitative study complements such quantitative data through demographic questionnaires as well as semi-structured individual and group interviews. The backgrounds and voices of diverse Black transfer students in four-year engineering degree programs were captured through these methods. Major findings from this research include evidence that community college faculty, peer networks, and family members facilitated transfer. Other results distinguish Black African from Black American transfers; included in these distinctions are depictions of different K-12 schooling experiences and differences in how participants self-identified. The findings build upon the few studies that account for expanded dimensions of student diversity within the Black population. Among other demographic data, participants' countries of birth and years of migration to the U.S. (if applicable) are included. Interviews reveal participants' perceptions of the factors impacting their educational trajectories in STEM and their subsequent ability to transfer into a competitive undergraduate engineering program. This study is inclusive of, and reveals, an important shifting demographic within the United States: Black Africans, who represent one of the fastest-growing segments of the immigrant population.

Relevance: 100.00%

Abstract:

This book documents and evaluates the growing consumer revolution against digital copyright law, and makes a unique theoretical contribution to the debate surrounding this issue. With a focus on recent US copyright law, the book charts the consumer rebellion against the Sonny Bono Copyright Term Extension Act 1998 (US) and the Digital Millennium Copyright Act 1998 (US). The author explores the significance of key judicial rulings and considers legal controversies over new technologies, such as the iPod, TiVo, the Sony PlayStation 2, Google Book Search, and peer-to-peer networks. The book also highlights cultural developments, such as the emergence of digital sampling and mash-ups, the construction of the BBC Creative Archive, and the evolution of the Creative Commons. Digital Copyright and the Consumer Revolution will be of prime interest to academics, law students and lawyers interested in the ramifications of copyright law, as well as to policymakers, given its focus on recent legislative developments and reform proposals. The book will also appeal to librarians, information managers, creative artists, consumers, technology developers, and other users of copyright material.

Relevance: 100.00%

Abstract:

A new technology – 3D printing – has the potential to make radical changes to aspects of the way in which we live. Put simply, it allows people to download designs and turn them into physical objects by laying down successive layers of material. Replacements or parts for household objects such as toys, utensils and gadgets could become available at the press of a button. With this innovation, however, comes the need to consider impacts on a wide range of forms of intellectual property, as Dr Matthew Rimmer explains. 3D printing is the latest in a long line of disruptive technologies – including photocopiers, cassette recorders, MP3 players, personal computers, peer-to-peer networks, and wikis – which have challenged intellectual property laws, policies, practices, and norms. As The Economist has observed, ‘Tinkerers with machines that turn binary digits into molecules are pioneering a whole new way of making things—one that could well rewrite the rules of manufacturing in much the same way as the PC trashed the traditional world of computing.’

Relevance: 100.00%

Abstract:

In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end-users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from several perspectives: optimality of the protocol, selfishness of end-users, and fairness of the protocol toward end-users. On the one hand, this multifaceted analysis allows us to select the best-suited protocol among the available ones based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new optimality criteria that we should take into account, and new network properties that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol, we present an analysis of a general backoff scheme, in which optimization is applied to a backoff function of general form; this leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the former we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlays and peer-to-peer networks, where the resource is not placed at a common station but spread across the network. The theoretical analysis suggests which behavior an end station will select under such a reputation mechanism.
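
The thesis analyzes a backoff scheme with a backoff function of general form. As a minimal sketch of that setting (not the thesis's actual model), here is a slotted-time contention loop with a pluggable backoff function, with binary exponential backoff as one instance; `transmit_ok` and all parameters are hypothetical:

```python
import random

def exponential_backoff(attempt, base=1.0, cap=1024.0):
    # One instance of a backoff function: the window doubles per attempt.
    # Any non-decreasing function of `attempt` can be swapped in.
    return min(cap, base * 2 ** attempt)

def contend(transmit_ok, backoff_fn=exponential_backoff, max_attempts=16):
    # Slotted-time contention loop: on collision, wait a random number
    # of slots drawn from the current backoff window, then retry.
    for attempt in range(1, max_attempts + 1):
        if transmit_ok():
            return attempt                 # success after this many tries
        slots = random.uniform(0, backoff_fn(attempt))
        # ... sleep for `slots` slot times before the next attempt ...
    raise TimeoutError("gave up after max_attempts")
```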

Relevance: 100.00%

Abstract:

Peer-to-peer networks are being used extensively nowadays for file sharing, video on demand, and live streaming. For IPTV, delay deadlines are more stringent than for file sharing. Coolstreaming was the first P2P IPTV system. In this paper, we model New Coolstreaming (a newer version of Coolstreaming) via a queueing network. We use a two-time-scale decomposition of Markov chains to compute the stationary distribution of the number of peers and the expected number of substreams in the overlay which are not being received at the required rate due to parent overloading. We also characterize the end-to-end delay encountered by a video packet received by a user and originated at the server. Three factors contribute to the delay. The first is the mean shortest path length between any two overlay peers, in overlay hops of the partnership graph, which is shown to be O(log n), where n is the number of peers in the overlay. The second is the mean number of routers between any two overlay neighbours, which is seen to be at most O(log N_I), where N_I is the number of routers in the Internet. The third is the mean delay at a router in the Internet; we provide an approximation of this mean delay E[W]. Thus, the mean end-to-end delay in New Coolstreaming is shown to be upper bounded by O((log E[N]) (log N_I) E[W]), where E[N] is the mean number of peers in a channel.
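
As a purely numeric illustration of that three-factor bound (the constant c and all inputs below are hypothetical, not from the paper):

```python
import math

def delay_bound(mean_peers, num_routers, mean_router_delay, c=1.0):
    # Evaluates c * log(E[N]) * log(N_I) * E[W]; `c` stands for the
    # constant hidden by the O(.) notation.
    return c * math.log(mean_peers) * math.log(num_routers) * mean_router_delay

# Hypothetical inputs: 10^4 peers on the channel, 10^6 routers,
# 5 ms mean per-router delay.
print(delay_bound(1e4, 1e6, 0.005))   # about 0.64 s with c = 1
```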

Relevance: 100.00%

Abstract:

Nowadays, distributing large volumes of data over corporate TCP/IP networks raises problems such as high network and server utilization, long completion times, and greater sensitivity to failures in the network infrastructure. These problems can be reduced by using peer-to-peer (P2P) networks. The goal of this dissertation is to analyze the performance of the standard BitTorrent protocol in corporate networks, and to repeat the analysis after a modification to the protocol's default behavior. In this modification, the tracker identifies the IP address of the peer requesting the swarm's list of IP addresses and returns only those belonging to the same local network, plus the original seeder, with the goal of reducing traffic on wide-area links. In typical corporate scenarios, simulations showed that the change can reduce average bandwidth consumption and average download times compared with standard BitTorrent, besides making the distribution more robust to failures of wide-area links. The simulations also showed that in more complex environments, with many clients, where bandwidth restrictions on wide-area links cause congestion and packet drops, the performance of the standard BitTorrent protocol can resemble that of a client-server distribution. In this last case, the proposed modification showed consistent improvements in distribution performance.
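
A minimal sketch of the modified tracker's peer selection described above; the function name, the /24 LAN boundary, and the sample addresses are illustrative assumptions, not taken from the dissertation:

```python
import ipaddress

def select_peers(requester_ip, swarm_ips, seeder_ip, lan_prefix=24):
    # Return only swarm members on the requester's local network, plus
    # the original seeder, so wide-area links carry one copy of the data.
    lan = ipaddress.ip_network(f"{requester_ip}/{lan_prefix}", strict=False)
    local = [ip for ip in swarm_ips
             if ipaddress.ip_address(ip) in lan and ip != requester_ip]
    return local + [seeder_ip]

print(select_peers("10.1.2.30", ["10.1.2.31", "10.9.9.9", "10.1.2.77"],
                   seeder_ip="192.0.2.10"))
# -> ['10.1.2.31', '10.1.2.77', '192.0.2.10']
```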

Relevance: 100.00%

Abstract:

Fast forward error correction codes are becoming an important component in bulk content delivery. They fit in naturally with multicast scenarios as a way to deal with losses, and are now seeing use in peer-to-peer networks as a basis for distributing load. In particular, new irregular sparse parity-check codes have been developed with provable average linear-time performance, a significant improvement over previous codes. In this paper, we present a new heuristic for generating codes with similar performance, based on observing a server with an oracle for client state. This heuristic is easy to implement and provides further intuition into the need for an irregular, heavy-tailed distribution.
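
Irregular, heavy-tailed degree distributions of this kind are exemplified by Luby's soliton distributions from the fountain-code literature. The sketch below samples encoding-symbol degrees from the ideal soliton distribution, as an illustration of the distribution family only, not of the paper's oracle-based heuristic:

```python
import random

def ideal_soliton(k):
    # Ideal soliton distribution over degrees 1..k (Luby):
    # rho(1) = 1/k, rho(i) = 1/(i*(i-1)) for i = 2..k.
    return [1.0 / k] + [1.0 / (i * (i - 1)) for i in range(2, k + 1)]

def sample_degree(k):
    # Draw one encoding-symbol degree from the heavy-tailed distribution.
    return random.choices(range(1, k + 1), weights=ideal_soliton(k))[0]

print([sample_degree(1000) for _ in range(10)])
# mostly small degrees, with an occasional large one (the heavy tail)
```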

Relevance: 100.00%

Abstract:

We present new, simple, efficient data structures for approximate reconciliation of set differences, a useful standalone primitive for peer-to-peer networks and a natural subroutine in methods for exact reconciliation. In the approximate reconciliation problem, peers A and B respectively have subsets of elements S_A and S_B of a large universe U. Peer A wishes to send a short message M to peer B with the goal that B should use M to determine as many elements in the set S_B − S_A as possible. To avoid the expense of round-trip communication times, we focus on the situation where a single message M is sent. We motivate the performance trade-offs between message size, accuracy, and computation time for this problem with a straightforward approach using Bloom filters. We then introduce approximate reconciliation trees, a more computationally efficient solution that combines techniques from Patricia tries, Merkle trees, and Bloom filters. We present an analysis of approximate reconciliation trees and provide experimental results comparing the various methods proposed for approximate reconciliation.
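
The straightforward Bloom-filter approach mentioned above can be sketched as follows: peer A sends a Bloom filter of S_A as the single message M, and peer B reports every element of S_B that the filter does not contain; false positives are exactly the difference elements B misses, which is the message-size/accuracy trade-off. A minimal sketch, with the double-hashing construction as an assumption:

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.m, self.k = size_bits, num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Double hashing: derive k bit positions from one SHA-256 digest.
        d = hashlib.sha256(str(item).encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return ((h1 + i * h2) % self.m for i in range(self.k))

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))

# Peer A summarizes S_A in a single short message M...
S_A, S_B = set(range(900)), set(range(100, 1000))
M = BloomFilter()
for x in S_A:
    M.add(x)
# ...and peer B recovers most of S_B - S_A from M alone.
found = {x for x in S_B if x not in M}
print(len(found), "of", len(S_B - S_A), "difference elements found")
```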

Relevance: 100.00%

Abstract:

Working memory neural networks that encode the invariant temporal order of sequential events are characterized. Inputs to the networks, called Sustained Temporal Order REcurrent (STORE) models, may be presented at widely differing speeds, durations, and interstimulus intervals. The STORE temporal order code is designed to enable all emergent groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network.
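
As a toy illustration of an event-driven temporal-order code (not the STORE equations themselves): if each new item enters at full activity and every stored item is attenuated by a fixed factor per event rather than per unit of time, the resulting activity gradient encodes presentation order regardless of presentation speed or interstimulus interval. A hypothetical sketch:

```python
def store_like_buffer(events, decay=0.7):
    # Event-driven update: activities change only when an item arrives,
    # so the stored order code is invariant to presentation speed and
    # interstimulus interval. (Toy recency gradient; the actual STORE
    # networks can also produce primacy and bowed gradients.)
    memory = {}
    for item in events:
        for k in memory:
            memory[k] *= decay        # attenuate earlier items per event
        memory[item] = 1.0            # newest item enters at full strength
    return sorted(memory, key=memory.get, reverse=True)

print(store_like_buffer(["A", "B", "C"]))   # -> ['C', 'B', 'A']
```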

Relevance: 100.00%

Abstract:

The development of the Internet, and in particular of social networks, has arguably given a new view of many aspects of human behavior, including those associated with addictions, specifically technology-related ones. Following a descriptive correlational design, we present the results of a study of Internet and, in particular, social network addiction among university students in the Social and Legal Sciences. The sample consisted of 373 participants from the cities of Granada, Sevilla, Málaga, and Córdoba. To gather the data, a questionnaire designed by Young was translated into Spanish. The main research objective was to determine whether university students could be considered social network addicts. The most prominent result was that the participants do not consider themselves to be addicted to the Internet or to social networks; in particular, women reported greater distance from social networks. Notably, these results differ from those found in the literature review, which raises the question: are the participants in a phase of denial about their addiction?

Relevance: 100.00%

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
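
In symbols, writing G' for the graph formed by the adversarial insertions only and G for the actual healed graph (notation assumed here for illustration), the two guarantees stated above are:

```latex
\operatorname{dist}_{G}(v,w) \le \operatorname{dist}_{G'}(v,w)\cdot\log n
\qquad\text{and}\qquad
\deg_{G}(v) \le 3\,\deg_{G'}(v)
```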

Relevance: 100.00%

Abstract:

Healing algorithms play a crucial part in distributed peer-to-peer networks where failures occur continuously and frequently. Whereas there are approaches to robustness that rely largely on built-in redundancy, we adopt a responsive approach that is more akin to that of biological networks, e.g., the brain. The general goal of self-healing distributed graphs is to maintain certain network properties while recovering from failure quickly and making bounded alterations locally. Several self-healing algorithms have been suggested in the recent literature [IPDPS'08, PODC'08, PODC'09, PODC'11]; they heal various network properties while fulfilling competing requirements, such as keeping degree increase low while maintaining connectivity, expansion, and low stretch of the network. In this work, we augment the previous algorithms with the notion of edge-preserving self-healing, which requires that the healing algorithm not delete any edge originally present or adversarially inserted. This reflects the cost of adding additional edges, but more importantly, it immediately follows that edge preservation helps maintain any subgraph-induced property that is monotonic, in particular important properties such as graph and subgraph densities. Density is an important network property, and in certain distributed networks maintaining it preserves high connectivity among certain subgraphs and backbones. We introduce a general model of self-healing, and introduce xheal+, an edge-preserving version of xheal [PODC'11]. © 2012 IEEE.
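
A minimal sketch of what the edge-preservation contract means operationally (illustrative only, not xheal+'s actual repair rule): a heal step may add edges but never removes an edge between surviving nodes, so any monotonic property, such as the density of a fixed subgraph, can only be preserved or improved:

```python
def heal_step(edges, deleted_node, new_edges):
    # Edges incident to the deleted node vanish with it, but no edge
    # between surviving nodes is ever removed; healing may only add.
    surviving = {e for e in edges if deleted_node not in e}
    healed = surviving | set(new_edges)
    assert surviving <= healed        # the edge-preservation contract
    return healed
```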

Relevance: 100.00%

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges.

We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Delta) times its original diameter, where Delta is the maximum degree of the network initially. We note that for many peer-to-peer systems, Delta is polylogarithmic, so the diameter increase would be an O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round, and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
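
A degree increase of at most 3 is exactly what one gets if a deleted node is replaced by wiring its former neighbors into a balanced binary tree, where each neighbor acquires at most one parent and two children. A toy centralized sketch of that repair idea (the actual data structure is distributed and considerably more involved):

```python
def tree_repair(neighbors):
    # Connect the former neighbors of a deleted node in a balanced
    # binary tree: node i links to children 2i+1 and 2i+2. Each neighbor
    # gains at most 3 edges, and any two former neighbors end up within
    # O(log d) hops of each other, where d is the deleted node's degree.
    edges = []
    for i in range(len(neighbors)):
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(neighbors):
                edges.append((neighbors[i], neighbors[child]))
    return edges

print(tree_repair(["a", "b", "c", "d", "e"]))
# -> [('a', 'b'), ('a', 'c'), ('b', 'd'), ('b', 'e')]
```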
