814 results for Peer-to-Peer Networks
Abstract:
An electricity demand reduction project based on comprehensive residential consumer engagement was established within an Australian community in 2008. By 2011, both peak demand and grid-supplied electricity consumption had decreased to below pre-intervention levels. This case study research explored the relationship developed between the utility, the community and individual consumers from the residential customer perspective, through qualitative research on 22 residential households. It is proposed that an energy utility can be highly successful at peak demand reduction by becoming a community member and a peer to residential consumers, developing the trust, access, influence and partnership required to create an environment responsive to change. A peer-community approach could provide policymakers with a pathway for implementing pro-environmental behaviour for low-carbon communities, as well as peak demand reduction, thereby addressing government emission targets while limiting cost-of-living increases from infrastructure expenditure.
Abstract:
This paper considers the ongoing litigation against the peer-to-peer network Kazaa. Record companies and Hollywood studios have faced jurisdictional and legal problems in suing this network for copyright infringement. As Wired Magazine observes: 'The servers are in Denmark. The software is in Estonia. The domain is registered Down Under, the corporation on a tiny island in the South Pacific. The users - 60 million of them - are everywhere around the world.' In frustration, copyright owners have launched copyright actions against intermediaries, such as Internet Service Providers like Verizon. They have also filed suits against individual users of file-sharing programs. In addition, copyright owners have called for domestic and international law reform in respect of digital copyright. The Senate Committee on Government Affairs in the United States Congress has reviewed the controversial use of subpoenas in suits against users of file-sharing peer-to-peer networks. The United States has encouraged other countries to adopt provisions of the Digital Millennium Copyright Act 1998 (US) in bilateral and regional free trade agreements.
Abstract:
Peer-to-peer networks are used extensively nowadays for file sharing, video on demand and live streaming. For IPTV, delay deadlines are more stringent than for file sharing. Coolstreaming was the first P2P IPTV system. In this paper, we model New Coolstreaming (the newer version of Coolstreaming) via a queueing network. We use two-time-scale decomposition of Markov chains to compute the stationary distribution of the number of peers and the expected number of substreams in the overlay which are not being received at the required rate due to parent overloading. We also characterize the end-to-end delay encountered by a video packet received by a user and originated at the server. Three factors contribute to the delay. The first is the mean shortest path length between any two overlay peers, in overlay hops of the partnership graph, which is shown to be O(log n), where n is the number of peers in the overlay. The second is the mean number of routers between any two overlay neighbours, which is seen to be at most O(log N_I), where N_I is the number of routers in the Internet. The third is the mean delay at a router in the Internet, for which we provide an approximation E[W]. Thus, the mean end-to-end delay in New Coolstreaming is shown to be upper bounded by O((log E[N]) (log N_I) E[W]), where E[N] is the mean number of peers at a channel.
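The bound above combines the three factors multiplicatively. A back-of-the-envelope sketch (the function name, the units, and the assumption that all hidden big-O constants equal 1 are illustrative, not from the paper):

```python
import math

def coolstreaming_delay_bound(mean_peers, num_routers, mean_router_delay):
    """Rough upper bound on mean end-to-end delay, following the abstract's
    three-factor decomposition: O(log E[N]) overlay hops, at most O(log N_I)
    routers per overlay hop, and mean per-router delay E[W].
    All big-O constants are assumed to be 1."""
    overlay_hops = math.log(mean_peers)      # mean overlay path length
    routers_per_hop = math.log(num_routers)  # routers between overlay neighbours
    return overlay_hops * routers_per_hop * mean_router_delay

# e.g. 10,000 peers per channel, one million routers, 2 ms mean router delay
estimate = coolstreaming_delay_bound(1e4, 1e6, 0.002)
```

Plugging in such round numbers gives a delay on the order of a fraction of a second, consistent with the stringent IPTV deadlines the abstract mentions.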
Abstract:
We present new, simple, efficient data structures for approximate reconciliation of set differences, a useful standalone primitive for peer-to-peer networks and a natural subroutine in methods for exact reconciliation. In the approximate reconciliation problem, peers A and B respectively have subsets S_A and S_B of a large universe U. Peer A wishes to send a short message M to peer B with the goal that B should use M to determine as many elements of the set S_B − S_A as possible. To avoid the expense of round-trip communication, we focus on the situation where a single message M is sent. We motivate the performance trade-offs between message size, accuracy and computation time for this problem with a straightforward approach using Bloom filters. We then introduce approximate reconciliation trees, a more computationally efficient solution that combines techniques from Patricia tries, Merkle trees, and Bloom filters. We present an analysis of approximate reconciliation trees and provide experimental results comparing the various methods proposed for approximate reconciliation.
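The Bloom-filter baseline the abstract mentions can be sketched in a few lines; the filter size, hash construction and block names below are illustrative choices, not the paper's:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item, one-sided error
    (no false negatives, occasional false positives)."""
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

def approximate_difference(filter_of_A, set_B):
    # Elements of B that the filter says A lacks are definitely in S_B - S_A;
    # false positives make B *miss* some differences (one-sided error).
    return {x for x in set_B if not filter_of_A.might_contain(x)}

# peer A holds blocks 0..99, peer B holds blocks 80..119
S_A = {f"block{i}" for i in range(100)}
S_B = {f"block{i}" for i in range(80, 120)}
f = BloomFilter(m_bits=2048, k_hashes=4)
for x in S_A:
    f.add(x)
diff = approximate_difference(f, S_B)  # subset of S_B - S_A
```

The message M here is just the 2048-bit filter, illustrating the abstract's trade-off: a larger filter lowers the false-positive rate and so recovers more of S_B − S_A, at the cost of message size.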
Abstract:
Working memory neural networks are characterized which encode the invariant temporal order of sequential events. Inputs to the networks, called Sustained Temporal Order REcurrent (STORE) models, may be presented at widely differing speeds, durations, and interstimulus intervals. The STORE temporal order code is designed to enable all emergent groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network.
Abstract:
This presentation reports on the formal evaluation, through questionnaires, of a new Level 1 undergraduate course, for 130 student teachers, that uses blended learning. The course design seeks to radicalise the department's approach to teaching, learning and assessment, and to use students as change agents. Its structure and content model social constructivist approaches to learning. Building on students' experiences of, and reflections on, previous learning promotes further learning through the support of "able others" (Vygotsky 1978), facilitating and nurturing a secure community of practice for students new to higher education. The course's design incorporates individual, paired, small and large group activities and exploits online video, audio and text materials. Course units begin and end with face-to-face tutor-led activities. Online elements, including discussions and formative submissions, are tutor-mediated. Students work together face-to-face and online to read articles, write reflections, develop presentations, and research and share experiences and resources. Summative joint assignments and peer assessments emphasise the value of collaboration and teamwork for academic, personal and professional development. Initial informal findings are positive, indicating that students have engaged readily with course content and structure, with few reporting difficulties accessing or using technology. Students have welcomed the opportunity to work together to tackle readings in a new genre, pilot presentation skills, and receive and give constructive feedback to peers. Course tutors have indicated that depth and quality of study are evident, with regular online formative submissions enabling tutors to identify and engage directly with students' needs, provide feedback and develop appropriately designed distance and face-to-face teaching materials.
Pastoral tutors have indicated that students have reported non-engagement of peers, leading to the rapid application of academic or personal support. Outcomes of the formal evaluation will inform the development of Level 2 and 3 courses and influence the department’s use of blended learning.
Abstract:
The development of the Internet, and in particular of social networks, has supposedly given a new view of the different aspects that surround human behaviour, including those associated with addictions, specifically those related to technologies. Following a descriptive correlational design, we present the results of a study of addiction to the Internet, and in particular to social networks, among university students from the Social and Legal Sciences. The sample comprised 373 participants from the cities of Granada, Sevilla, Málaga, and Córdoba. To gather the data, a questionnaire designed by Young was translated into Spanish. The main research objective was to determine whether university students could be considered social network addicts. The most prominent result was that the participants do not consider themselves addicted to the Internet or to social networks; women in particular reported a greater distance from social networks. Notably, these results differ from those found in the literature review, which raises the question: are the participants in a phase of denial towards the addiction?
Abstract:
Many modern networks are \emph{reconfigurable}, in the sense that the topology of the network can be changed by the nodes in the network. For example, peer-to-peer, wireless and ad-hoc networks are reconfigurable. More generally, many social networks, such as a company's organizational chart; infrastructure networks, such as an airline's transportation network; and biological networks, such as the human brain, are also reconfigurable. Modern reconfigurable networks have a complexity unprecedented in the history of engineering, resembling a dynamic, evolving living animal more than a structure of steel designed from a blueprint. Unfortunately, our mathematical and algorithmic tools have not yet developed enough to handle this complexity and fully exploit the flexibility of these networks. We believe that it is no longer possible to build networks that are scalable and never have node failures. Instead, these networks should be able to admit small, and perhaps periodic, failures and still recover, much as skin heals from a cut. This process, in which the network recovers by maintaining key invariants in response to attack by a powerful adversary, is what we call \emph{self-healing}. Here, we present several fast and provably good distributed algorithms for self-healing in reconfigurable dynamic networks. Each of these algorithms has different properties and a different set of guarantees and limitations. We also discuss future directions and theoretical questions we would like to answer.
Abstract:
By 2015, with the proliferation of wireless multimedia applications and services (e.g., mobile TV, video on demand, online video repositories, immersive video interaction, peer-to-peer video streaming, and interactive video gaming), and any-time, anywhere communication, the number of smartphones and tablets will exceed 6.5 billion as the most common web access devices. Data volumes in wireless multimedia data-intensive applications and mobile web services are projected to increase by a factor of 10 every five years, associated with a 20 percent increase in energy consumption, 80 percent of which is multimedia traffic related. In turn, multimedia energy consumption is rising at 16 percent per year, doubling every six years. It is estimated that energy costs alone account for as much as half of the annual operating expenditure. This has prompted concerted efforts by major operators to drastically reduce carbon emissions by up to 50 percent over the next 10 years. Clearly, there is an urgent need for new disruptive paradigms of green media to bridge the gap between wireless technologies and multimedia applications.
Abstract:
One area of telecommunications research of growing interest concerns future fourth-generation (and beyond) mobile communication systems. In recent years the concept of community networks has been developed, in which users group together according to common interests. These concepts have been explored horizontally across different communication layers, from community communication networks (e.g., Seattle Wireless or Personal Telco) to peer-to-peer interest networks. However, such networks are usually seen as overlay networks, or simply as free-association networks. In practice, the notion of a self-organized, fully service/community-oriented network, integrally supported at the architectural level, does not exist. This work therefore presents an original contribution to the creation of community networks, with an underlying service-oriented architecture that fully supports multiple community networks on the same device, with all the security, trust and service-provisioning features required in this type of scenario (a node may belong simultaneously to more than one community network). Given their importance to community network systems, particular attention was paid to resource management and access control, both realized in a decentralized manner and through highly scalable mechanisms. To this end, a policy language is presented that supports the creation of virtual communities. This language is used not only to map the social structure of the community's members, but also to manage the devices, resources and services owned by the members, in a controlled and distributed way.
Abstract:
While the IEEE 802.15.4/Zigbee protocol stack is being considered as a promising technology for low-cost, low-power Wireless Sensor Networks (WSNs), several issues in the standard specifications remain open. One of these is how to build a synchronized multi-hop cluster-tree network, which is quite suitable for ensuring QoS support in WSNs. In fact, the current IEEE 802.15.4/Zigbee specifications restrict synchronization in the beacon-enabled mode (through the generation of periodic beacon frames) to star-based networks, while multi-hop networking is supported using the peer-to-peer mesh topology, but with no synchronization. Even though both specifications mention the possible use of cluster-tree topologies, which combine multi-hop and synchronization features, the description of how to effectively construct such a network topology is missing. This paper tackles this problem, unveils the ambiguities regarding the use of the cluster-tree topology and proposes a synchronization mechanism based on Time Division Beacon Scheduling to construct cluster-tree WSNs. We also propose a methodology for efficient duty-cycle management in each router (cluster head) of a cluster-tree WSN that ensures the fairest use of bandwidth resources. The feasibility of the proposal is clearly demonstrated through an experimental testbed based on our own implementation of the IEEE 802.15.4/Zigbee protocol.
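The core idea of Time Division Beacon Scheduling can be sketched as follows. The greedy offset assignment and function names are illustrative assumptions; only the BO/SO parameters and the 15.36 ms base superframe duration (at 250 kb/s in the 2.4 GHz band) come from the IEEE 802.15.4 standard:

```python
def schedule_beacons(routers, beacon_order, superframe_order, base_ms=15.36):
    """Illustrative sketch of Time Division Beacon Scheduling (TDBS).
    In IEEE 802.15.4, beacon interval BI = base * 2**BO and superframe
    duration SD = base * 2**SO, so at most 2**(BO - SO) routers can be
    active without their beacons/superframes colliding.  Here each router
    is greedily given a distinct beacon transmission offset (in ms)."""
    n_slots = 2 ** (beacon_order - superframe_order)
    if len(routers) > n_slots:
        raise ValueError("not enough superframe slots for all routers")
    sd_ms = base_ms * 2 ** superframe_order
    # router i transmits its beacon (and is active) at offset i * SD
    return {router: i * sd_ms for i, router in enumerate(routers)}

# BO=6, SO=4: four slots, each router active 2**(SO-BO) = 25% of the time
offsets = schedule_beacons(["coordinator", "router1", "router2"], 6, 4)
```

The duty-cycle management the paper proposes would then tune BO and SO per cluster; this sketch only shows why 2^(BO−SO) bounds the number of non-colliding clusters.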
Abstract:
The study of the morphology of tidal networks and their relation to salt marsh vegetation is currently an active area of research, and a number of theories have been developed which require validation using extensive observations. Conventional methods of measuring networks and associated vegetation can be cumbersome and subjective. Recent advances in remote sensing techniques mean that these can now often reduce measurement effort whilst at the same time increasing measurement scale. The status of remote sensing of tidal networks and their relation to vegetation is reviewed. The measurement of network planforms and their associated variables is possible to sufficient resolution using digital aerial photography and airborne scanning laser altimetry (LiDAR), with LiDAR also being able to measure channel depths. A multi-level knowledge-based technique is described to extract networks from LiDAR in a semi-automated fashion. This allows objective and detailed geomorphological information on networks to be obtained over large areas of the inter-tidal zone. It is illustrated using LiDAR data of the River Ems, Germany, the Venice lagoon, and Carnforth Marsh, Morecambe Bay, UK. Examples of geomorphological variables of networks extracted from LiDAR data are given. Associated marsh vegetation can be classified into its component species using airborne hyperspectral and satellite multispectral data. Other potential applications of remote sensing for network studies include determining spatial relationships between networks and vegetation, measuring marsh platform vegetation roughness, in-channel velocities and sediment processes, studying salt pans, and for marsh restoration schemes.
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space filling Hilbert’s curve, as it possesses good locality preserving properties. However, there exists little comparison between Hilbert’s curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert’s curve, Sammon’s mapping and Principal Component Analysis have been used to generate a 1d space with locality preserving properties. This work provides empirical evidence to support the use of Hilbert’s curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2d network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest neighbour analysis confirms Hilbert’s curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvements and better techniques to preserve locality information are required.
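The Hilbert-curve step of such a scheme can be sketched for the two-landmark (2-D) case. The quantization grid, the 256 ms latency cap and the `peer_id` helper are illustrative assumptions, not details from the paper; the `hilbert_xy_to_d` routine is the standard iterative xy-to-index algorithm:

```python
def hilbert_xy_to_d(order, x, y):
    """Map a point (x, y) on a 2**order x 2**order grid to its index
    along the Hilbert curve (standard iterative xy-to-d algorithm)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:          # rotate/flip the quadrant
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s >>= 1
    return d

def peer_id(latencies_ms, order=8, max_ms=256.0):
    """Hypothetical peer-ID assignment for two landmarks: quantize the
    latency vector onto the grid and use the Hilbert index as a
    locality-preserving scalar ID (grid size and cap are arbitrary)."""
    n = 1 << order
    x, y = (min(n - 1, int(v / max_ms * n)) for v in latencies_ms)
    return hilbert_xy_to_d(order, x, y)

# peers with similar latency vectors tend to receive nearby IDs
near_a = peer_id([30.0, 120.0])
near_b = peer_id([32.0, 118.0])
far_c = peer_id([200.0, 10.0])
```

The locality-preserving property the abstract evaluates is visible here: `near_a` and `near_b` differ by far less than `near_a` and `far_c`, though the mapping is imperfect near quadrant boundaries, which is exactly what the nearest-neighbour analysis probes.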
Abstract:
We introduce a model for a pair of nonlinear evolving networks, defined over a common set of vertices, subject to edgewise competition. Each network may grow new edges spontaneously or through triad closure. Both networks inhibit the other’s growth and encourage the other’s demise. These nonlinear stochastic competition equations yield to a mean field analysis resulting in a nonlinear deterministic system. There may be multiple equilibria; and bifurcations of different types are shown to occur within a reduced parameter space. This situation models competitive peer-to-peer communication networks such as BlackBerry Messenger displacing SMS; or instant messaging displacing emails.
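A mean-field system of this flavour can be integrated numerically in a few lines. The functional forms and parameters below are a plausible stand-in chosen for illustration, not the paper's actual equations:

```python
def simulate_competition(steps=20000, dt=0.01,
                         alpha=(0.01, 0.01),   # spontaneous edge growth
                         beta=(1.0, 1.0),      # triad-closure growth
                         gamma=(1.0, 0.2)):    # cross-inhibition strength
    """Illustrative mean-field sketch of two competing networks with edge
    densities x, y in [0, 1]: spontaneous growth, triad closure (quadratic
    in density) and inhibition by the rival.  Forward-Euler integration of
        dx/dt = a1*(1-x) + b1*x**2*(1-x) - g1*x*y
        dy/dt = a2*(1-y) + b2*y**2*(1-y) - g2*x*y
    (hypothetical equations, not the paper's model)."""
    x = y = 0.1
    for _ in range(steps):
        dx = alpha[0]*(1-x) + beta[0]*x*x*(1-x) - gamma[0]*x*y
        dy = alpha[1]*(1-y) + beta[1]*y*y*(1-y) - gamma[1]*x*y
        x = min(1.0, max(0.0, x + dt*dx))
        y = min(1.0, max(0.0, y + dt*dy))
    return x, y

# with asymmetric inhibition, the less-inhibited network displaces the other
x_final, y_final = simulate_competition()
```

With these parameters the run settles into the displacement regime the abstract describes (one network near full density, the other nearly extinct); varying `gamma` moves the system between equilibria, which is where the bifurcation analysis operates.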
Abstract:
In Peer-to-Peer (P2P) networks, it is often desirable to assign node IDs which preserve locality relationships in the underlying topology. Node locality can be embedded into node IDs by a one-dimensional mapping, via a Hilbert space-filling curve, of a vector of network distances from each node to a subset of reference landmark nodes within the network. However, this approach is fundamentally limited because, while robustness and accuracy might be expected to improve with the number of landmarks, the effectiveness of one-dimensional Hilbert curve mapping falls prey to the curse of dimensionality. This work proposes an approach to solve this issue using Landmark Multidimensional Scaling (LMDS) to reduce a large set of landmarks to a smaller set of virtual landmarks. This smaller set has been postulated to represent the intrinsic dimensionality of the network space, and therefore a space-filling curve applied to these virtual landmarks is expected to produce a better mapping of the node ID space. The proposed approach, the Virtual Landmarks Hilbert Curve (VLHC), is particularly suitable for decentralised systems like P2P networks. In the experimental simulations, the effectiveness of the methods is measured by the locality preservation derived from node IDs, in terms of latency to nearest neighbours. A variety of realistic network topologies are simulated, and this work provides strong evidence to suggest that VLHC performs better than either Hilbert curves or LMDS used independently of each other.
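The dimensionality-reduction step can be sketched with ordinary PCA standing in for LMDS (the paper uses Landmark MDS proper; the data shapes, helper name and synthetic latencies below are illustrative assumptions):

```python
import numpy as np

def virtual_landmarks(latency_matrix, k=2):
    """Hypothetical sketch of the reduction step in a VLHC-style scheme:
    project each node's landmark-latency vector onto the top-k principal
    components ("virtual landmarks").  A k-dimensional Hilbert curve would
    then map these coordinates to scalar node IDs.  Plain PCA stands in
    for the paper's Landmark MDS here."""
    X = np.asarray(latency_matrix, dtype=float)
    Xc = X - X.mean(axis=0)
    # principal axes via SVD of the centred data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(0)
# 50 nodes x 12 landmarks: latencies driven by 2 latent coordinates + noise,
# mimicking a network whose intrinsic dimensionality is much lower than 12
latent = rng.uniform(0, 100, size=(50, 2))
mix = rng.uniform(0, 1, size=(2, 12))
lat = latent @ mix + rng.normal(0, 1, size=(50, 12))
V = virtual_landmarks(lat, k=2)   # shape (50, 2)
```

This mirrors the abstract's premise: the twelve raw landmark distances collapse to two virtual coordinates, so the subsequent space-filling-curve mapping avoids the curse of dimensionality that degrades a direct twelve-dimensional Hilbert mapping.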