53 results for peer-to-peer (P2P) computing


Relevance:

100.00%

Publisher:

Abstract:

Motivated by the need to design efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that, with high probability, maintains a constant-degree graph with high expansion even under continuous, high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/polylog(n)) per round (where n is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only O(polylog(n)) overhead for topology maintenance: only polylogarithmic (in n) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The protocol is a fundamental ingredient for designing efficient fully-distributed algorithms for core distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
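As a rough, hedged illustration of the expansion property the protocol targets (not of the protocol itself), the following Python sketch estimates the spectral gap of a random bounded-degree graph; the graph model, size, and degree are illustrative assumptions.

# Illustrative only: estimate the spectral gap of a random d-regular graph,
# a common proxy for "high expansion" in bounded-degree overlay topologies.
# This is NOT the paper's protocol, just a check of the target property.
import networkx as nx
import numpy as np

n, d = 512, 8                      # stable network size and node degree (illustrative)
G = nx.random_regular_graph(d, n, seed=42)

# Eigenvalues of the normalized adjacency matrix A/d lie in [-1, 1]; a large gap
# between 1 and the second-largest absolute eigenvalue indicates expansion.
A = nx.to_numpy_array(G) / d
eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
spectral_gap = 1.0 - eigs[1]
print(f"second eigenvalue: {eigs[1]:.3f}, spectral gap: {spectral_gap:.3f}")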

Relevance:

100.00%

Publisher:

Abstract:

The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, makes measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy is typically large in volume, very difficult to collect without some form of automation, often not available in a suitable format, and time-consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection, and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.

Relevance:

100.00%

Publisher:

Abstract:

Capillary-based systems for measuring the input impedance of musical wind instruments were first developed in the mid-20th century and remain in widespread use today. In this paper, the basic principles and assumptions underpinning the design of such systems are examined. Inexpensive modifications to a capillary-based impedance measurement set-up, made possible by advances in computing and data acquisition technology, are discussed. The modified set-up is able to measure both impedance magnitude and impedance phase even though it contains only one microphone. In addition, a method of calibration is described that results in a significant improvement in accuracy when measuring high-impedance objects on the modified capillary-based system. The method involves carrying out calibration measurements on two different objects whose impedances are well known theoretically. The benefits of performing two calibration measurements (as opposed to the single calibration measurement that has traditionally been used) are demonstrated experimentally through input impedance measurements on two test objects and a Boosey and Hawkes oboe.
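The abstract does not give the calibration equations, so the following sketch assumes a simple linear measurement model (U = a*Z + b) purely to illustrate why two reference objects of theoretically known impedance suffice to determine the calibration constants; the model, function names, and numerical values are hypothetical.

# Two-reference calibration sketch under an assumed linear model U = a*Z + b,
# where U is the measured microphone response and Z the true input impedance.
# Two objects of theoretically known impedance determine the complex constants a, b.
import numpy as np

def calibrate(Z_ref1, U_ref1, Z_ref2, U_ref2):
    """Solve U = a*Z + b for the complex calibration constants a, b."""
    a = (U_ref1 - U_ref2) / (Z_ref1 - Z_ref2)
    b = U_ref1 - a * Z_ref1
    return a, b

def measure_impedance(U, a, b):
    """Invert the calibrated model to recover the impedance of a test object."""
    return (U - b) / a

# Hypothetical single-frequency values (complex, arbitrary units):
a, b = calibrate(Z_ref1=2.0e7 + 0j, U_ref1=0.81 + 0.02j,
                 Z_ref2=5.0e6 + 0j, U_ref2=0.22 + 0.01j)
print(measure_impedance(0.55 + 0.015j, a, b))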

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is a technological advancement that provides resources through the Internet on a pay-as-you-go basis. Cloud computing uses virtualisation technology to enhance the efficiency and effectiveness of its advantages. Virtualisation is the key to consolidating computing resources so that multiple instances run on each piece of hardware, increasing the utilisation rate of every resource and thus reducing the number of resources that need to be bought, racked, powered, cooled, and managed. Cloud computing has very appealing features; however, many enterprises and users are still reluctant to move into the cloud due to serious security concerns related to the virtualisation layer. It is therefore of foremost importance to secure the virtual environment. In this paper, we present an elastic framework to secure the virtualised environment for trusted cloud computing, called the Server Virtualisation Security System (SVSS). SVSS provides security solutions for Virtual Machines, located on the hypervisor, by deploying malicious-activity detection techniques, network traffic analysis techniques, and system resource utilisation analysis techniques. SVSS consists of four modules: the Anti-Virus Control Module, the Traffic Behavior Monitoring Module, the Malicious Activity Detection Module, and the Virtualisation Security Management Module. An SVSS prototype has been deployed on a Xen virtualised environment to validate its feasibility, efficiency, and accuracy.
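As an organizational illustration only, the sketch below arranges the four SVSS modules named above as Python classes; the class and method names, return values, and hypervisor integration are hypothetical placeholders, not the paper's implementation.

# Organizational sketch of the four SVSS modules named in the abstract.
# Method names and return values are hypothetical; the paper's actual
# detection techniques and hypervisor hooks are not reproduced here.
class AntiVirusControlModule:
    def scan(self, vm_id):
        return []                       # placeholder: list of infected files

class TrafficBehaviorMonitoringModule:
    def analyse(self, vm_id):
        return {"suspicious_flows": 0}  # placeholder traffic statistics

class MaliciousActivityDetectionModule:
    def check_resources(self, vm_id):
        return {"cpu_anomaly": False}   # placeholder resource-usage analysis

class VirtualisationSecurityManagementModule:
    """Coordinates the other modules from the hypervisor side."""
    def __init__(self):
        self.av = AntiVirusControlModule()
        self.traffic = TrafficBehaviorMonitoringModule()
        self.detector = MaliciousActivityDetectionModule()

    def assess(self, vm_id):
        return {
            "infections": self.av.scan(vm_id),
            "traffic": self.traffic.analyse(vm_id),
            "resources": self.detector.check_resources(vm_id),
        }

print(VirtualisationSecurityManagementModule().assess("vm-01"))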

Relevance:

100.00%

Publisher:

Abstract:

Background: Large-scale biological jobs on high-performance computing systems require manual intervention if one or more of the computing cores on which they execute fail. This incurs not only the cost of maintaining the job, but also the cost of the time taken to reinstate it and the risk of losing data and work accomplished before the failure. Approaches that proactively detect computing core failures and take action to relocate a core's job onto reliable cores can make a significant step towards automating fault tolerance. Method: This paper describes an experimental investigation into the use of multi-agent approaches for fault tolerance. Two approaches are studied, the first at the job level and the second at the core level. The approaches are investigated for single-core failure scenarios that can occur in the execution of parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates multi-agent technology at both the job and core levels. Experiments are pursued in the context of genome searching, a popular computational biology application. Result: The key conclusion is that the proposed approaches are feasible for automating fault tolerance in high-performance computing systems with minimal human intervention. In a typical experiment in which fault tolerance is studied, centralised and decentralised checkpointing approaches on average add 90% to the actual time for executing the job. In the same experiment, the multi-agent approaches add only 10% to the overall execution time.
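To illustrate the core-level idea of proactive relocation in a hedged way, the following toy sketch has an agent poll a hypothetical health score for its core and move the job to a spare core when the score drops below a threshold; the health model, threshold, and names are assumptions, not the paper's multi-agent design.

# Toy sketch of proactive, core-level fault handling: an agent watches a health
# signal for its core and relocates the job to a spare core before failure.
import random

def core_health(core_id):
    # Hypothetical health score in [0, 1]; in practice this would come from
    # hardware sensors or heartbeat monitoring.
    return random.uniform(0.0, 1.0)

def agent_step(job, current_core, spare_cores, threshold=0.2):
    if core_health(current_core) < threshold and spare_cores:
        new_core = spare_cores.pop()
        print(f"relocating {job} from core {current_core} to core {new_core}")
        return new_core
    return current_core

core = 0
spares = [4, 5, 6, 7]
for step in range(10):
    core = agent_step("genome-search-job", core, spares)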

Relevance:

100.00%

Publisher:

Abstract:

This study introduces an inexact, but ultra-low-power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional 6-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads, and should therefore be employed judiciously. Herein, the authors propose a novel scheme for designing inexact computing architectures that selectively protect memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the bio-signal application characteristics. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence shows that a significance-based memory protection approach leads to a small degradation in output quality with respect to an exact implementation, while resulting in substantial energy gains in both the memory and the processing subsystem.
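The following toy sketch illustrates the intuition behind significance-based protection under stated assumptions: errors injected into data that strongly determine the power-spectrum output degrade it far more than the same errors confined to a region the application largely discards, so only the former would merit protection. The error model, buffers, and signal are illustrative, not the authors' architecture.

# Toy illustration: compare the end-to-end impact of memory errors injected into
# a significant buffer (the input signal) versus an insignificant region (the
# upper part of the spectrum that the application discards).
import numpy as np

rng = np.random.default_rng(0)
ecg = np.sin(2 * np.pi * 1.2 * np.arange(1024) / 256)   # synthetic "ECG-like" signal

def flip_some_values(x, n_errors, rng):
    y = x.copy()
    idx = rng.integers(0, y.size, n_errors)
    y[idx] += rng.normal(0, 1.0, n_errors)               # crude stand-in for SRAM bit flips
    return y

reference = np.abs(np.fft.rfft(ecg)) ** 2

# Errors in the significant (input) buffer: large output degradation.
corrupted_input = flip_some_values(ecg, 20, rng)
err_significant = np.linalg.norm(np.abs(np.fft.rfft(corrupted_input)) ** 2 - reference)

# Errors confined to an insignificant region: small end-to-end impact.
spectrum = reference.copy()
half = len(spectrum) // 2
spectrum[half:] = flip_some_values(spectrum[half:], 20, rng)
err_insignificant = np.linalg.norm(spectrum - reference)

print(err_significant, err_insignificant)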

Relevance:

100.00%

Publisher:

Abstract:

Exascale computation is the next target of high-performance computing. In the push to create exascale computing platforms, simply increasing the number of hardware devices is not an acceptable option, given the limitations of power consumption, heat dissipation, and programming models designed for current hardware platforms. Instead, new hardware technologies, coupled with improved programming abstractions and more autonomous runtime systems, are required to achieve this goal. This position paper presents the design of a new runtime for a new heterogeneous hardware platform being developed to explore energy-efficient, high-performance computing. By combining a number of different technologies, this framework will both simplify the programming of current and future HPC applications and automate the scheduling of data and computation across this new hardware platform. In particular, this work explores the use of FPGAs to achieve both the power and performance goals of exascale, as well as using the runtime to automatically effect dynamic configuration and reconfiguration of these platforms.

Relevance:

100.00%

Publisher:

Abstract:

Smartphones have undergone a remarkable evolution over the last few years, from simple calling devices to full-fledged computing devices on which multiple services and applications run concurrently. Unfortunately, battery capacity increases at a much slower pace and has become the main bottleneck for Internet-connected smartphones. Several software-based techniques have been proposed in the literature for improving battery life. The most common techniques include data compression, packet aggregation or batch scheduling, offloading partial computations to the cloud, and periodically switching off interfaces (e.g., WiFi or 3G/4G) for short intervals. However, there has been no focus on eliminating the energy waste of background applications that extensively utilize smartphone resources such as the CPU, memory, GPS, WiFi, and 3G/4G data connection. In this paper, we propose an Application State Proxy (ASP) that suppresses/stops applications on smartphones and maintains their presence on another network device. The applications are resumed/restarted on the smartphone only in case of an event, such as a new message arrival. We present the key requirements for the ASP service and different possible architectural designs. In short, the ASP concept can significantly improve the battery life of smartphones by reducing, to the maximum extent, the resource usage of background applications.
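A minimal, hypothetical sketch of the ASP idea follows: the proxy on another network device holds a suspended app's session, absorbs its network presence, and queues a wakeup for the phone only when an event such as a new message arrives. All class, method, and message names here are assumptions for illustration.

# Minimal sketch of an Application State Proxy holding suspended app state and
# waking the phone on incoming events. Names and structures are hypothetical.
import queue

class ApplicationStateProxy:
    def __init__(self):
        self.suspended = {}            # app_id -> session state kept alive by the proxy
        self.wakeups = queue.Queue()   # events to push back to the phone

    def suspend(self, app_id, session_state):
        """Phone hands the app's session to the proxy and stops the app locally."""
        self.suspended[app_id] = session_state

    def on_network_event(self, app_id, event):
        """Proxy receives traffic for a suspended app and triggers a wakeup."""
        if app_id in self.suspended:
            self.wakeups.put((app_id, self.suspended.pop(app_id), event))

proxy = ApplicationStateProxy()
proxy.suspend("messenger", {"keepalive_token": "abc"})
proxy.on_network_event("messenger", "new message arrival")
print(proxy.wakeups.get())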

Relevance:

70.00%

Publisher:

Abstract:

Mutual variation of the received signal, which occurs as a consequence of the channel reciprocity property, has recently been proposed as a viable method for secret key generation. However, reciprocity cannot be strictly maintained in practice, as the property holds only in the absence of interference. To ensure the propagation-defined key remains secret, one requirement is that a high degree of uncertainty remains between the legitimate users' channel response and that of any eavesdropper. In this paper, we investigate whether such de-correlation occurs for an indoor point-to-point link at 2.45 GHz. This is achieved by computing the localized correlation coefficient between the simultaneous channel responses measured by the legitimate users and those of multiple distributed eavesdroppers, for static and dynamic scenarios.
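Assuming the localized correlation coefficient is a windowed Pearson correlation between the channel responses observed by a legitimate receiver and an eavesdropper (the abstract does not give the exact definition), a minimal sketch looks like the following; the window length and synthetic data are illustrative.

# Windowed (localized) Pearson correlation between two channel-response series,
# as a stand-in for the measured legitimate-user and eavesdropper channels.
import numpy as np

def localized_correlation(h_legitimate, h_eavesdropper, window=50):
    coeffs = []
    for start in range(0, len(h_legitimate) - window + 1, window):
        a = h_legitimate[start:start + window]
        b = h_eavesdropper[start:start + window]
        coeffs.append(np.corrcoef(a, b)[0, 1])
    return np.array(coeffs)

rng = np.random.default_rng(1)
h_bob = rng.normal(size=1000)                  # synthetic channel magnitude samples
h_eve = 0.2 * h_bob + rng.normal(size=1000)    # partially correlated eavesdropper channel
print(localized_correlation(h_bob, h_eve).round(2))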

Relevance:

70.00%

Publisher:

Abstract:

We consider the problem of resource selection in clustered Peer-to-Peer Information Retrieval (P2P IR) networks with cooperative peers. The clustered P2P IR framework presents a significant departure from general P2P IR architectures by employing clustering to ensure content coherence between resources at the resource selection layer, without disturbing document allocation. We propose that such a property could be leveraged in resource selection by adapting well-studied and popular inverted lists for centralized document retrieval. Accordingly, we propose the Inverted PeerCluster Index (IPI), an approach that adapts the inverted lists, in a straightforward manner, for resource selection in clustered P2P IR. IPI also encompasses a strikingly simple peer-specific scoring mechanism that exploits the said index for resource selection. Through an extensive empirical analysis on P2P IR testbeds, we establish that IPI competes well with the sophisticated state-of-the-art methods in virtually every parameter of interest for the resource selection task, in the context of clustered P2P IR.
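As a hedged illustration of adapting inverted lists for resource selection (not the IPI scheme itself), the toy sketch below maps terms to the peers whose content contains them and ranks peers with a simplistic additive score; the scoring formula and data are assumptions.

# Toy inverted-list index for resource (peer) selection: each term maps to the
# peers whose content contains it, with a simple frequency-based peer score.
from collections import defaultdict

class PeerClusterIndex:
    def __init__(self):
        self.postings = defaultdict(dict)     # term -> {peer_id: term frequency at peer}

    def index_peer(self, peer_id, documents):
        for doc in documents:
            for term in doc.lower().split():
                self.postings[term][peer_id] = self.postings[term].get(peer_id, 0) + 1

    def select_peers(self, query, k=2):
        scores = defaultdict(float)
        for term in query.lower().split():
            for peer_id, tf in self.postings.get(term, {}).items():
                scores[peer_id] += tf         # simplistic additive score, not the IPI formula
        return sorted(scores, key=scores.get, reverse=True)[:k]

index = PeerClusterIndex()
index.index_peer("peer-A", ["distributed hash tables", "p2p information retrieval"])
index.index_peer("peer-B", ["music file sharing", "p2p streaming"])
print(index.select_peers("p2p retrieval"))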