961 results for Cache Replacement


Relevance: 60.00%

Abstract:

Although cooperation generally increases the amount of resources available to a community of nodes, thus improving individual and collective performance, it also allows for the appearance of potential mistreatment problems through the exposure of one node's resources to others. We study such concerns by considering a group of independent, rational, self-aware nodes that cooperate using online caching algorithms, where the exposed resource is the storage of each node. Motivated by content networking applications, including web caching, CDNs, and P2P, this paper extends our previous work on the offline version of the problem, which was limited to object replication and was conducted under a game-theoretic framework. We identify and investigate two causes of mistreatment: (1) cache state interactions (due to the cooperative servicing of requests) and (2) the adoption of a common scheme for cache replacement/redirection/admission policies. Using analytic models, numerical solutions of these models, as well as simulation experiments, we show that online cooperation schemes using caching are fairly robust to mistreatment caused by state interactions. For such mistreatment to become possible, the interaction through the exchange of miss streams has to be very intense, making it feasible for the mistreated nodes to detect and react to the exploitation. This robustness ceases to exist when nodes fetch and store objects in response to remote requests, i.e., when they operate as Level-2 caches (or proxies) for other nodes. Regarding mistreatment due to a common scheme, we show that this can easily take place when the "outlier" characteristics of some of the nodes are overlooked. This finding underscores the importance of allowing cooperative caching nodes the flexibility to choose from a diverse set of schemes that fit the peculiarities of individual nodes. To that end, we outline an emulation-based framework for the development of mistreatment-resilient distributed selfish caching schemes.
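
The state interactions discussed above can be made concrete with a toy simulation (not the authors' analytic model): two nodes with LRU caches serve each other's misses, optionally also storing objects fetched on behalf of the peer (the Level-2, or proxy, mode mentioned in the abstract). The Zipf-like demand skews, cache sizes, and request counts below are arbitrary assumptions chosen only to make the interaction visible; a minimal Python sketch follows.

    import random
    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU cache holding object ids."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()

        def lookup(self, obj):
            if obj in self.store:
                self.store.move_to_end(obj)          # refresh recency on a hit
                return True
            return False

        def insert(self, obj):
            self.store[obj] = True
            self.store.move_to_end(obj)
            if len(self.store) > self.capacity:      # evict the least recently used object
                self.store.popitem(last=False)

    def simulate(level2=False, requests=20000, universe=500, cap=50, seed=1):
        rng = random.Random(seed)
        nodes = [LRUCache(cap), LRUCache(cap)]
        # Each node has its own generalized power-law (Zipf-like) IRM demand skew (assumed values).
        weights = [[1.0 / (rank + 1) ** skew for rank in range(universe)] for skew in (0.8, 1.2)]
        hits = [0, 0]
        for _ in range(requests):
            i = rng.randrange(2)                                # requesting node
            obj = rng.choices(range(universe), weights[i])[0]
            if nodes[i].lookup(obj):
                hits[i] += 1
            else:
                if nodes[1 - i].lookup(obj):                    # miss forwarded to the peer
                    hits[i] += 1                                # a remote hit perturbs the peer's LRU order
                elif level2:
                    nodes[1 - i].insert(obj)                    # Level-2 (proxy) mode: peer fetches and stores too
                nodes[i].insert(obj)                            # local fetch-and-store on a local miss
        return [round(h / requests, 3) for h in hits]

    print("cooperation via state interaction only:", simulate(level2=False))
    print("cooperation with Level-2 (proxy) nodes:", simulate(level2=True))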

Relevance: 60.00%

Abstract:

Cache lookup is an integral part of cooperative caching in ad hoc networks. In this paper, we discuss a cooperative caching architecture with a distributed cache lookup protocol that relies on a virtual backbone for locating and accessing data within a cooperative cache. Our proposal consists of two phases: (i) formation of a virtual backbone and (ii) the cache lookup phase. The nodes in a Connected Dominating Set (CDS) form the virtual backbone. The cache lookup protocol makes use of the nodes in the virtual backbone for effective data dissemination and discovery. The idea in this scheme is to reduce the number of nodes involved in the cache lookup process by constructing a CDS that contains a small number of nodes while still providing full coverage of the network. We evaluated the effect of various parameter settings on performance metrics such as message overhead, cache hit ratio, and average query delay. Compared to previous schemes, the proposed scheme not only reduces message overhead but also improves the cache hit ratio and reduces the average delay.
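
The abstract does not reproduce the backbone-formation algorithm itself; as a rough illustration of the general idea, the Python sketch below builds a connected dominating set greedily over an adjacency map. The greedy rule, tie-breaking, and the example topology are assumptions made for brevity, not the authors' protocol.

    def greedy_cds(adj):
        """Greedy connected dominating set heuristic for a connected graph.

        adj: dict mapping node -> set of neighbours.
        Returns a set of backbone nodes such that every node is in the set or
        adjacent to it, and the set induces a connected subgraph.
        """
        nodes = set(adj)
        start = max(nodes, key=lambda v: len(adj[v]))     # seed with the highest-degree node
        cds = {start}
        dominated = {start} | adj[start]
        # Grow the backbone one neighbour at a time, always choosing the frontier
        # node that newly dominates the most still-uncovered nodes.
        while dominated != nodes:
            frontier = {v for u in cds for v in adj[u]} - cds
            best = max(frontier, key=lambda v: len(adj[v] - dominated))
            cds.add(best)
            dominated |= adj[best] | {best}
        return cds

    # Small ad hoc topology given as an adjacency map (illustrative only).
    topology = {
        0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4},
        3: {1, 4, 5}, 4: {2, 3, 6}, 5: {3, 6}, 6: {4, 5},
    }
    print("virtual backbone nodes:", sorted(greedy_cds(topology)))

Lookup queries and cache advertisements can then be confined to these backbone nodes instead of being flooded to every node in the network.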

Relevance: 60.00%

Abstract:

Cooperative caching is an attractive solution for reducing bandwidth demands and network latency in mobile ad hoc networks. Deploying caches in mobile nodes can reduce the overall traffic considerably. Cache hits eliminate the need to contact the data source frequently, which avoids additional network overhead. In this paper we propose a data discovery and cache management policy for cooperative caching that reduces the caching overhead and delay by reducing the number of control messages flooded into the network. A cache discovery process based on the location of neighboring nodes is developed for this purpose. The cache replacement policy we propose aims at increasing the cache hit ratio. The simulation results are promising with respect to the metrics studied.
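
The discovery messages themselves are not specified in this abstract; the sketch below only illustrates, under assumed inputs, the general idea of probing nearby nodes (known from their positions) instead of flooding the whole network. The function name, the radius rule, and the data structures are hypothetical.

    import math

    def nearest_holder(positions, holders, requester, item, radius):
        """Location-aware cache discovery sketch: query only nodes within `radius`
        of the requester, nearest first, rather than flooding every node.

        positions: node -> (x, y) coordinates learned from neighbour beacons.
        holders:   node -> set of cached item ids.
        Returns the node to fetch `item` from, or None (fall back to the data source).
        """
        here = positions[requester]
        in_range = [n for n in positions
                    if n != requester and math.dist(here, positions[n]) <= radius]
        for n in sorted(in_range, key=lambda n: math.dist(here, positions[n])):
            if item in holders[n]:          # one unicast probe per candidate node
                return n
        return None

    print(nearest_holder(
        positions={"A": (0, 0), "B": (1, 1), "C": (5, 5)},
        holders={"A": set(), "B": {"song.mp3"}, "C": {"song.mp3"}},
        requester="A", item="song.mp3", radius=3.0))    # -> B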

Relevance: 60.00%

Abstract:

Mobile users connected to wireless networks expect performance comparable to that of wired networks for interactive multimedia applications. Satisfying Quality of Service (QoS) requirements for such applications in wireless networks is a challenging problem due to the low bandwidth, high error rate, and frequent disconnections of wireless channels. In addition, wireless networks suffer from varying bandwidth. In this paper we investigate object prefetching during times of connectedness and bandwidth availability to enhance user-perceived connectedness. This paper presents an access model that is suitable for multimedia access in wireless networks. Access modelling for the purpose of predicting future accesses in the context of speculative prefetching has received much attention in the literature. The model recognizes that a web page is typically a compound of several files rather than just a single file. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item whose access probability is larger than a manually tuned threshold. This paper takes a different approach. Specifically, it models the performance of the prefetcher, taking into account access predictions and resource parameters, and develops a prefetch policy based on a theoretical analysis of the model. Since the analysis considers the cache as one of the resource parameters, the resulting policy integrates prefetch and cache replacement decisions. The paper also investigates the effect of prefetching on network load. In order to make effective use of available resources and maximize the access improvement, it is beneficial to prefetch all items whose access probabilities exceed a certain threshold.
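
As a minimal sketch of the threshold rule stated above (the threshold itself comes out of the paper's analytical model; here it is simply a parameter), the following Python fragment selects prefetch candidates by predicted access probability and packs them into the cache space set aside for prefetching. The item names, sizes, and numbers are illustrative assumptions.

    def prefetch_candidates(predictions, threshold, cache_slots, sizes):
        """Threshold-based prefetch sketch.

        predictions: item -> predicted access probability for the next period.
        threshold:   minimum probability worth spending bandwidth on (assumed given).
        cache_slots: cache capacity reserved for prefetched data (same units as sizes).
        Returns the items to fetch during the current window of connectivity.
        """
        # Keep only items predicted strongly enough to justify their transfer...
        worth_it = [(p, item) for item, p in predictions.items() if p >= threshold]
        # ...then pack the most probable ones into the reserved cache space, so that
        # prefetched data competes for a bounded budget instead of evicting demand-fetched data.
        chosen, used = [], 0
        for p, item in sorted(worth_it, reverse=True):
            if used + sizes[item] <= cache_slots:
                chosen.append(item)
                used += sizes[item]
        return chosen

    print(prefetch_candidates(
        predictions={"index.html": 0.9, "logo.png": 0.7, "video.mp4": 0.3},
        threshold=0.5, cache_slots=120,
        sizes={"index.html": 20, "logo.png": 15, "video.mp4": 400}))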

Relevance: 60.00%

Abstract:

To improve the accuracy of access prediction, a prefetcher for web browsing should recognize the fact that a web page is a compound: a user request for a single web page may require the retrieval of several multimedia items. Our prediction algorithm builds an access graph that captures the dynamics of web navigation rather than merely attaching probabilities to the hypertext structure. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item whose access probability is larger than a manually tuned threshold. This paper takes a different approach. Specifically, it models the performance of the prefetcher and develops a prefetch policy based on a theoretical analysis of the model. In the analysis, we derive a formula for the expected improvement in access time when prefetching is performed in anticipation of a compound request. We then develop an algorithm that integrates prefetch and cache replacement decisions so as to maximize this improvement. We present experimental results to demonstrate the effectiveness of compound-based prefetching in low-bandwidth networks.
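
The paper's exact expected-improvement formula is not quoted in this abstract; the sketch below uses a simplified stand-in (gain = access probability times fetch latency of a prefetched item, loss = reuse probability times fetch latency of the evicted item) purely to illustrate how prefetch and replacement decisions can be coupled around one anticipated compound request.

    def plan_compound_prefetch(compound, cached, latency, reuse_prob):
        """Sketch of prefetch/replacement integration for one anticipated compound request.

        compound:   {item: access probability} for the files making up the predicted page.
        cached:     set of items currently occupying the (full) cache.
        latency:    item -> time to fetch it over the link.
        reuse_prob: item -> probability a cached item is needed again.
        The gain/loss expressions are simplified stand-ins for the paper's formula.
        """
        gains = sorted(((p * latency[i], i) for i, p in compound.items() if i not in cached),
                       reverse=True)
        losses = sorted((reuse_prob[j] * latency[j], j) for j in cached)
        plan, improvement = [], 0.0
        for (gain, item), (loss, victim) in zip(gains, losses):
            if gain <= loss:               # further swaps can only reduce the expected improvement
                break
            plan.append((item, victim))    # prefetch `item`, evicting `victim`
            improvement += gain - loss
        return plan, round(improvement, 3)

    print(plan_compound_prefetch(
        compound={"page.html": 0.95, "hero.jpg": 0.6, "ad.js": 0.2},
        cached={"old1.css", "old2.png"},
        latency={"page.html": 1.2, "hero.jpg": 0.8, "ad.js": 0.4, "old1.css": 0.5, "old2.png": 0.9},
        reuse_prob={"old1.css": 0.1, "old2.png": 0.5}))

Whether a swap pays off depends jointly on the predicted component and on the cached item it would displace, which is the sense in which prefetch and cache replacement decisions are integrated.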

Relevance: 40.00%

Abstract:

The inherent temporal locality in memory accesses is filtered out by the L1 cache. As a consequence, an L2 cache with LRU replacement incurs significantly more misses than the optimal replacement policy (OPT). We propose to narrow this gap through a novel replacement strategy that mimics the replacement decisions of OPT. The L2 cache is logically divided into two components, a Shepherd Cache (SC) with simple FIFO replacement and a Main Cache (MC) with an emulation of optimal replacement. The SC plays the dual role of caching lines and guiding the replacement decisions in the MC. Our proposed organization can cover 40% of the gap between OPT and LRU for a 2MB cache, resulting in a 7% overall speedup. Comparison with the dynamic insertion policy, a victim buffer, a V-Way cache, and an LRU-based fully associative cache demonstrates that our scheme performs better than all of these strategies.
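
A highly simplified, single-set toy model of the SC/MC organization is sketched below: lines enter a small FIFO, and while a line waits there the cache records when each MC line is first reused, so that the eventual MC victim approximates OPT's choice. The sizes, the flat dictionary bookkeeping, and the omission of SC lines as replacement candidates are simplifications made for illustration, not the paper's hardware design.

    from collections import OrderedDict

    class ShepherdCacheSet:
        """Toy single-set Shepherd Cache (FIFO) / Main Cache pair."""
        def __init__(self, sc_ways=2, mc_ways=6):
            self.sc_ways, self.mc_ways = sc_ways, mc_ways
            self.sc = OrderedDict()   # line -> first-reuse times of MC lines observed so far
            self.mc = set()
            self.clock = 0

        def access(self, line):
            self.clock += 1
            hit = line in self.mc or line in self.sc
            if line in self.mc:
                for ledger in self.sc.values():       # every waiting SC line gathers lookahead
                    ledger.setdefault(line, self.clock)
            if not hit:
                self._fill(line)
            return hit

        def _fill(self, line):
            if len(self.sc) >= self.sc_ways:          # oldest SC line graduates into the MC
                grad, ledger = self.sc.popitem(last=False)
                if len(self.mc) >= self.mc_ways:
                    # Victim = MC line reused latest (or never) while `grad` was shepherded,
                    # approximating OPT's "farthest next use" decision.
                    victim = max(self.mc, key=lambda l: ledger.get(l, float("inf")))
                    self.mc.discard(victim)
                self.mc.add(grad)
            self.sc[line] = {}                        # new line starts shepherding in FIFO order

    cache = ShepherdCacheSet()
    for line in ["a", "b", "a", "c", "d", "b", "e", "a"]:
        print(line, "hit" if cache.access(line) else "miss")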

Relevance: 30.00%

Abstract:

CMPs enable simultaneous execution of multiple applications on the same platform, where they share cache resources. Diversity in the cache access patterns of these simultaneously executing applications can potentially trigger inter-application interference, leading to cache pollution. Whereas a large cache can ameliorate this problem, the larger power consumption that comes with increasing cache size, amplified at sub-100nm technologies, makes this solution prohibitive. In this paper, in order to address power-aware cache performance, we propose a caching structure that provides the following: (1) definition of application-specific cache partitions as aggregations of caching units (molecules), where the parameters of each molecule, namely size, associativity, and line size, are chosen so that its power consumption and access time are optimal for the given technology; (2) application-specific resizing of cache partitions with variable and adaptive associativity per cache line, way size, and variable line size; and (3) a replacement policy that is transparent to the partition in terms of size and heterogeneity in associativity and line size. Through simulation studies we establish the superiority of the molecular cache (a cache built as an aggregation of molecules), which offers a 29% power advantage over an equivalently performing traditional cache.
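
Purely as a data-structure illustration of the terminology above (not the proposed hardware), the following sketch models a molecule by the three parameters the abstract names (size, associativity, line size) and a partition as an aggregation of molecules; all concrete numbers are invented.

    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class Molecule:
        """One caching unit; its geometry is fixed where power and access time are optimal."""
        size_kb: int
        associativity: int
        line_size_b: int

    @dataclass
    class Partition:
        """An application-specific partition built by aggregating whole molecules, so it can
        be resized (and its effective associativity and line size varied) without rebuilding
        a monolithic cache."""
        app: str
        molecules: List[Molecule]

        def capacity_kb(self) -> int:
            return sum(m.size_kb for m in self.molecules)

    browser = Partition("browser", [Molecule(32, 4, 64), Molecule(32, 8, 32)])
    print(browser.app, "partition:", browser.capacity_kb(), "KB across", len(browser.molecules), "molecules")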

Relevance: 30.00%

Abstract:

In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While CIP is unsolvable in its most general form, there are special cases of practical importance in which it is solvable, including when the request stream follows an Independent Reference Model (IRM) with a generalized power-law (GPL) demand distribution. For these cases, we design two basic "litmus" tests that are able to detect LFU and LRU replacement policies, the effective size of the cache and of the object universe, and the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold and even when the underlying replacement policies are not "pure" LFU or LRU. We exemplify the value of our inference framework by considering example applications.
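
As a crude, self-contained stand-in for the litmus-test idea (the paper's actual tests are more elaborate), the sketch below generates an IRM request stream with power-law demand, runs it through a hidden LRU or LFU cache, and distinguishes the two from the miss stream alone: under perfect LFU the most popular objects stop missing once the cache warms up, whereas under LRU nearly every object keeps reappearing. The universe size, cache size, skew, and the statistic itself are assumptions.

    import itertools
    import random
    from collections import Counter, OrderedDict

    def irm_gpl_stream(n_objects, alpha, length, seed=0):
        """IRM requests with a generalized power-law (Zipf-like) popularity."""
        rng = random.Random(seed)
        cum = list(itertools.accumulate(1.0 / (rank + 1) ** alpha for rank in range(n_objects)))
        return [rng.choices(range(n_objects), cum_weights=cum)[0] for _ in range(length)]

    def miss_stream(requests, capacity, policy):
        """Run a cache as a black box and return only its misses (what a peer would observe)."""
        misses, cache, recency, freq = [], set(), OrderedDict(), Counter()
        for obj in requests:
            freq[obj] += 1                           # perfect-LFU bookkeeping
            if obj in cache:
                recency.move_to_end(obj)
                continue
            misses.append(obj)
            if len(cache) >= capacity:
                if policy == "LRU":
                    victim = next(iter(recency))     # least recently used
                else:                                # "LFU": least frequently requested so far
                    victim = min(cache, key=lambda o: freq[o])
                cache.discard(victim)
                recency.pop(victim, None)
            cache.add(obj)
            recency[obj] = None
        return misses

    def vanish_score(misses):
        """Litmus statistic: fraction of objects that miss in the first half of the
        stream but never miss again; large under perfect LFU, small under LRU."""
        first, second = set(misses[:len(misses) // 2]), set(misses[len(misses) // 2:])
        return len(first - second) / max(1, len(first))

    requests = irm_gpl_stream(n_objects=500, alpha=0.9, length=100_000)
    for hidden_policy in ("LRU", "LFU"):
        score = vanish_score(miss_stream(requests, capacity=50, policy=hidden_policy))
        print(hidden_policy, round(score, 3))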

Relevance: 30.00%

Abstract:

Task-based dataflow programming models and runtimes emerge as promising candidates for programming multicore and manycore architectures. These programming models analyze task dependencies dynamically at runtime and schedule independent tasks concurrently on the processing elements. In such models, cache locality, which is critical for performance, becomes more challenging in the presence of fine-grain tasks and in architectures with many simple cores.

This paper presents a combined hardware-software approach to improve cache locality and offer better performance in terms of execution time and energy in the memory system. We propose the explicit bulk prefetcher (EBP) and epoch-based cache management (ECM) to help runtimes prefetch task data and guide the replacement decisions in caches. The runtime software can use this hardware support to expose its internal knowledge about the tasks to the architecture and achieve more efficient task-based execution. Our combined scheme outperforms HW-only prefetchers and state-of-the-art replacement policies, improves performance by an average of 17%, generates on average 26% fewer L2 misses, and consumes on average 28% less energy in the components of the memory system.
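
The hardware interface of EBP/ECM is not spelled out in this abstract; the sketch below only illustrates, with hypothetical names and a software stub in place of the hardware, how a task-based runtime could expose a task's input footprint to a bulk prefetcher and open an eviction epoch before dispatching the task.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class TaskDescriptor:
        """What a task-dataflow runtime already knows about a task before it runs."""
        task_id: int
        inputs: List[Tuple[int, int]]    # (base address, length) of each input region
        outputs: List[Tuple[int, int]]

    class BulkPrefetcherStub:
        """Hypothetical software stand-in for the EBP/ECM hardware; names are illustrative only."""
        def program_bulk_prefetch(self, epoch: int, regions: List[Tuple[int, int]]) -> None:
            for base, length in regions:
                print(f"epoch {epoch}: prefetch [{base:#x}, {base + length:#x})")

        def open_epoch(self, epoch: int) -> None:
            # ECM idea: lines tagged with the current epoch are retained, while lines
            # belonging to finished epochs become preferred eviction victims.
            print(f"epoch {epoch}: cache lines tagged, older epochs demoted")

    def dispatch(task: TaskDescriptor, hw: BulkPrefetcherStub, run) -> None:
        """Runtime-side hook: expose the task's input footprint before running it."""
        hw.open_epoch(task.task_id)
        hw.program_bulk_prefetch(task.task_id, task.inputs)
        run(task)

    dispatch(TaskDescriptor(7, inputs=[(0x1000, 4096)], outputs=[(0x8000, 4096)]),
             BulkPrefetcherStub(), run=lambda t: print(f"task {t.task_id} executed"))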

Relevance: 20.00%

Abstract:

Background: Centers for Disease Control guidelines recommend replacement of peripheral intravenous (IV) catheters every 72 to 96 hours. Routine replacement is thought to reduce the risk of phlebitis and bacteraemia. Catheter insertion is an unpleasant experience for patients, and replacement may be unnecessary if the catheter remains functional and there are no signs of inflammation. Costs associated with routine replacement may be considerable. Objectives: To assess the effects of removing peripheral IV catheters when clinically indicated compared with removing and re-siting the catheter routinely.

Relevance: 20.00%

Abstract:

Computer-aided joint replacement surgery has become very popular during recent years and is being performed in increasing numbers all over the world. The accuracy of the system depends to a major extent on accurate registration and on the immobility of the tracker attachment devices in the bone. This study was designed to assess the forces needed to displace the tracker attachment devices in bone simulators. Bone simulators were used to maintain the uniformity of the bone structure during the study. The fixation devices tested were a 3mm diameter self-drilling, self-tapping threaded pin, a 4mm diameter self-tapping cortical threaded pin, a 5mm diameter self-tapping cancellous threaded pin, and a triplanar fixation device ('ortholock') used with three 3mm pins. All the devices were tested for pull-out, translational, and rotational forces in unicortical and bicortical fixation modes. Also tested were the normal bang strength and the forces generated by leaning on the devices. The forces required to produce translation increased with the increasing diameter of the pins: 105N, 185N, and 225N for the unicortical fixations and 130N, 200N, and 225N for the bicortical fixations of the 3mm, 4mm, and 5mm diameter pins, respectively. The forces required to pull out the pins were 1475N, 1650N, and 2050N for the unicortical and 1020N, 3044N, and 3042N for the bicortical fixations of the 3mm, 4mm, and 5mm diameter pins. The ortholock was tested to 900N in translation and 920N in pull-out and still did not fail. The rotatory forces required to displace the tracker on single pins were of the magnitude of 30N before failure. The ortholock device had rotational forces applied up to 135N and still did not fail. The manual leaning forces and the sudden bang forces generated were of the magnitude of 210N and 150N, respectively. The strength of the fixation pins increases with increasing diameter from three to five mm for the translational forces. There is no significant difference between the pull-out forces of the 4mm and 5mm diameter pins, though both are higher than those of the 3mm diameter pins; this is because of failure of the material at that stage rather than of the fixation device. The rotatory forces required to displace the tracker are very small, and much less than the forces that can be produced by the surgeon or assistants on single pins. Although the ortholock device was tested to 135N in rotation without failing, one has to be very careful not to put any forces on the tracker devices during the operation to ensure the accuracy of the procedure.

Relevance: 20.00%

Abstract:

We report the long-term outcome of the flangeless, cemented, all-polyethylene Exeter cup at a mean of 14.6 years (range 10 to 17) after operation. Of the 263 hips in 243 patients, 122 hips are still in situ, 112 patients (119 hips) have died, 18 hips were revised, and 3 patients (4 hips) had moved abroad and were lost to follow-up (1.5%). Radiographs demonstrated that two sockets had migrated and six more had radiolucent lines in all three zones. The Kaplan-Meier survivorship at 15 years, with revision for all causes as the endpoint, is 89.9% (95% CI 84.6 to 95.2%), and for aseptic cup loosening or lysis it is 91.7% (CI 86.6 to 96.8%). In the 210 hips with a diagnosis of primary osteoarthritis, survivorship for all causes is 93.2% (95% CI 88.1 to 98.3%) and for aseptic cup loosening 95.0% (CI 90.3 to 99.7%). The cemented all-polyethylene Exeter cup has excellent long-term survivorship.