325 results for Cache


Relevance:

10.00%

Publisher:

Abstract:

The stashR package (a Set of Tools for Administering SHared Repositories) for R implements a simple key-value style database where character string keys are associated with data values. The key-value databases can be either stored locally on the user's computer or accessed remotely via the Internet. Methods specific to the stashR package allow users to share data repositories or access previously created remote data repositories. In particular, methods are available for the S4 classes localDB and remoteDB to insert, retrieve, or delete data from the database as well as to synchronize local copies of the data to the remote version of the database. Users efficiently access information from a remote database by retrieving only the data files indexed by user-specified keys and caching this data in a local copy of the remote database. The local and remote counterparts of the stashR package offer the potential to enhance reproducible research by allowing users of Sweave to cache their R computations for a research paper in a localDB database. This database can then be stored on the Internet as a remoteDB database. When readers of the research paper wish to reproduce the computations involved in creating a specific figure or calculating a specific numeric value, they can access the remoteDB database and obtain the R objects involved in the computation.
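The access pattern described here (fetch a value by key from the remote repository only on first use, then serve it from the local copy) can be sketched in a few lines. The Python below is a hypothetical illustration of that caching idea, not stashR's S4 interface; the class and method names are invented for the example, and keys are assumed to be filename-safe.

    import os
    import pickle

    class CachedRemoteDB:
        """Hypothetical key-value store that caches remote fetches locally,
        illustrating the localDB/remoteDB idea (not the actual stashR API)."""

        def __init__(self, remote_fetch, cache_dir):
            self.remote_fetch = remote_fetch   # callable: key -> value (e.g. an HTTP GET)
            self.cache_dir = cache_dir
            os.makedirs(cache_dir, exist_ok=True)

        def _path(self, key):
            return os.path.join(self.cache_dir, key)

        def get(self, key):
            path = self._path(key)
            if os.path.exists(path):           # served from the local copy
                with open(path, "rb") as f:
                    return pickle.load(f)
            value = self.remote_fetch(key)     # retrieve only this key remotely
            with open(path, "wb") as f:        # ...and cache it for later reuse
                pickle.dump(value, f)
            return value

    db = CachedRemoteDB(remote_fetch=lambda k: {"object": k}, cache_dir="stash-cache")
    print(db.get("figure1-data"))   # fetched once, then answered locally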

Relevance:

10.00%

Publisher:

Abstract:

An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two pruning rules that accompanied the original algorithm. Because of these errors, the performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an InfiniBand cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, the implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
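For reference, the LCS length itself can be computed with the textbook dynamic program. The sequential Python sketch below is only a baseline for comparison; it is not the successor-table-based parallel algorithm examined in the paper.

    def lcs_length(a, b):
        """Classic O(len(a) * len(b)) dynamic program for the longest common
        subsequence length; a sequential baseline, not the parallel algorithm."""
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]

    print(lcs_length("GATTACA", "GCATGCU"))   # prints 4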

Relevance:

10.00%

Publisher:

Abstract:

As the performance gap between microprocessors and memory continues to grow, main memory accesses incur long latencies that limit system performance. Previous studies show that main memory access streams contain significant locality and that SDRAM devices provide parallelism through multiple banks and channels. This locality and parallelism have not been exploited thoroughly by conventional memory controllers. In this thesis, SDRAM address mapping techniques and memory access reordering mechanisms are studied and applied to memory controller design with the goal of reducing observed main memory access latency. The proposed bit-reversal address mapping attempts to distribute main memory accesses evenly across the SDRAM address space to enable bank parallelism. As memory accesses to distinct banks are interleaved, the access latencies are partially hidden and therefore reduced. By taking cache conflict misses into account, bit-reversal address mapping is able to direct potential row conflicts to different banks, further improving performance. The proposed burst scheduling is a novel access reordering mechanism that creates bursts by clustering accesses directed to the same rows of the same banks. Subject to a threshold, reads are allowed to preempt writes, and qualified writes are piggybacked at the end of the bursts. A sophisticated access scheduler selects accesses based on priorities and interleaves accesses to maximize SDRAM data bus utilization. Consequently, burst scheduling reduces the row conflict rate, increasing and exploiting the available row locality. Using revised SimpleScalar and M5 simulators, both techniques are evaluated and compared with existing academic and industrial solutions. With the SPEC CPU2000 benchmarks, bit-reversal reduces execution time by 14% on average over traditional page-interleaving address mapping. Burst scheduling also achieves a 15% reduction in execution time over conventional bank-in-order scheduling. Working constructively together, bit-reversal and burst scheduling achieve a 19% speedup across the simulated benchmarks.
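The mapping idea can be pictured with a small sketch. The field widths and the sample addresses below are assumptions chosen for illustration, not the exact parameters of the thesis; the point is only that reversing the address bits can redirect accesses that would collide in the same bank (a row conflict) to different banks.

    def bit_reverse(value, width):
        """Reverse the lowest `width` bits of `value`."""
        out = 0
        for _ in range(width):
            out = (out << 1) | (value & 1)
            value >>= 1
        return out

    # Illustrative field widths only (not the thesis's parameters).
    COL_BITS, BANK_BITS, ROW_BITS = 6, 3, 7
    ADDR_BITS = COL_BITS + BANK_BITS + ROW_BITS

    def page_interleaved(block_addr):
        """Conventional page interleaving: row | bank | column."""
        col = block_addr & ((1 << COL_BITS) - 1)
        bank = (block_addr >> COL_BITS) & ((1 << BANK_BITS) - 1)
        row = block_addr >> (COL_BITS + BANK_BITS)
        return row, bank, col

    def bit_reversed(block_addr):
        """Reverse the address bits first, then apply the same field split."""
        return page_interleaved(bit_reverse(block_addr, ADDR_BITS))

    # 0x0040 and 0x0240 differ only in a row-address bit: page interleaving maps
    # them to the same bank with different rows (a row conflict), while the
    # bit-reversed mapping sends them to different banks.
    print(page_interleaved(0x0040), page_interleaved(0x0240))   # (0, 1, 0) (1, 1, 0)
    print(bit_reversed(0x0040), bit_reversed(0x0240))           # (1, 0, 0) (1, 1, 0)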

Relevance:

10.00%

Publisher:

Abstract:

Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM degrades its performance dramatically; conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. Effective memory resource management therefore calls for a dynamic memory balancer, which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand, thereby achieving the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, applying the three optimizing techniques lowers the mean overhead of MRC construction from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and adopt different strategies for different prediction results. When there is a sufficient amount of physical memory on the host, memory is balanced locally among the VMs. Once the local memory resource is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the hot host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
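The MRC construction that the dissertation optimizes can be sketched with the classic stack-distance method: record, for every memory reference, how deep the referenced page sits in an LRU stack, then derive the miss ratio for each candidate memory size from the histogram of those distances. The Python below is a plain, unoptimized version of that standard technique (no AVL-based LRU, hot set sizing or intermittent tracking); the names and the toy trace are illustrative.

    def miss_ratio_curve(trace, max_size):
        """LRU miss-ratio curve from a page-reference trace via stack distances.
        Unoptimized O(N * M) sketch of the standard technique."""
        stack = []                      # LRU stack, most recently used first
        hist = [0] * (max_size + 1)     # hist[d] = hits at stack distance d
        for page in trace:
            if page in stack:
                d = stack.index(page) + 1      # stack distance (1 = most recent)
                if d <= max_size:
                    hist[d] += 1
                stack.remove(page)
            # cold misses and distances beyond max_size stay misses at every size
            stack.insert(0, page)
        total = len(trace)
        mrc, hits = [], 0
        for size in range(1, max_size + 1):
            hits += hist[size]                 # hits captured by a cache of this size
            mrc.append((size, 1.0 - hits / total))
        return mrc

    print(miss_ratio_curve("abcabcabdd", 4))   # [(1, 0.9), (2, 0.9), (3, 0.4), (4, 0.4)]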

Relevance:

10.00%

Publisher:

Abstract:

The article focuses on the current situation of Spanish case law on ISP liability. It starts by presenting the more salient peculiarities of the Spanish transposition of the safe harbours laid down in the E-Commerce Directive. These peculiarities relate to the knowledge requirement of the hosting safe harbour, and to the safe harbour for information location tools. The article then provides an overview of the cases decided so far with regard to each of the safe harbours. Very few cases have dealt with the mere conduit and the caching safe harbours, though the latter was discussed in an interesting case involving Google’s cache. Most cases relate to hosting and linking safe harbours. With regard to hosting, the article focuses particularly on the two judgments handed down by the Supreme Court that hold an open interpretation of actual knowledge, an issue where courts had so far been split. Cases involving the linking safe harbour have mainly dealt with websites offering P2P download links. Accordingly, the article explores the legal actions brought against these sites, which for the moment have been unsuccessful. The new legislative initiative to fight against digital piracy – the Sustainable Economy Bill – is also analyzed. After the conclusion, the article provides an Annex listing the cases that have dealt with ISP liability in Spain since the safe harbours scheme was transposed into Spanish law.

Relevance:

10.00%

Publisher:

Abstract:

On 3 April 2012, the Spanish Supreme Court issued a major ruling in favour of the Google search engine, including its 'cache copy' service: Sentencia n.172/2012, of 3 April 2012, Supreme Court, Civil Chamber. The importance of this ruling lies not so much in the circumstances of the case (the Supreme Court was clearly displeased by the claimant's 'maximalist' petitum to shut down the whole operation of the search engine) as in the court's willingness to go beyond the text of the Copyright Act to the general principles of the law and case law, and especially in its reading of the three-step test (Art. 40bis TRLPI) in a positive sense so as to include all these principles. After accepting that none of the limitations listed in the Spanish Copyright statute (TRLPI) exempted the unauthorized use of fragments of the contents of a personal website through the Google search engine and cache copy service, the Supreme Court concluded against infringement, on the grounds that the three-step test (Art. 40bis TRLPI) is to be read not only in a negative manner but also in a positive sense, taking into account that intellectual property – like any other kind of property – is limited in nature, must tolerate any ius usus inocui (harmless use by third parties) and must abide by the general principles of the law, such as good faith and the prohibition of an abusive exercise of rights (Art. 7 Spanish Civil Code). The ruling is a major success for a flexible interpretation and application of the copyright statutes, especially in the scenarios raised by new technologies and market agents, and for using the three-step test as a key tool to allow for it.

Relevance:

10.00%

Publisher:

Abstract:

Water harvesting has been practised successfully for millennia in regions across the world, and some recent interventions have also had a significant local impact. Yet the potential of this technique remains largely unknown, unrecognized and unappreciated. After decades of almost exclusive focus on controlling freshwater flows in rivers and lakes through investment in irrigation infrastructure, it is time to scale up the 'good practices' of water harvesting, those that have stood the test of time or emerged from new experience. Water harvesting offers under-exploited opportunities for the predominantly rainfed farming systems of the drylands of the developing world. It works best precisely in the areas where rural poverty is most severe. Done well, it both reduces hunger and fights poverty while improving the resilience of the environment. This knowledge about water-harvesting technologies and the settings in which they tend to perform best represents a genuine hidden wealth. For the first time, this knowledge has been processed, compiled and made available through an organized, illustrated and instructive tool that links the technologies to knowledge networks, a tool intended to help the users of these practical guidelines better understand and implement their choices.

Relevance:

10.00%

Publisher:

Abstract:

An overview is given of the lessons learned from the introduction of multi-threading using OpenMP in tmLQCD. In particular, programming style, performance measurements, cache misses, scaling, thread distribution for hybrid codes, race conditions, the overlapping of communication and computation and the measurement and reduction of certain overheads are discussed. Performance measurements and sampling profiles are given for different implementations of the hopping matrix computational kernel.

Relevance:

10.00%

Publisher:

Abstract:

Information-centric networking (ICN) is a new communication paradigm that aims at increasing security and efficiency of content delivery in communication networks. In recent years, many research efforts in ICN have focused on caching strategies to reduce traffic and increase overall performance by decreasing download times. Since caches need to operate at line speed, they have only a limited size and content can only be stored for a short time. However, if content needs to be available for a longer time, e.g., for delay-tolerant networking or to provide high content availability similar to content delivery networks (CDNs), persistent caching is required. We base our work on the Content-Centric Networking (CCN) architecture and investigate persistent caching by extending the current repository implementation in CCNx. We show by extensive evaluations in a YouTube and webserver traffic scenario that repositories can be efficiently used to increase content availability by significantly increasing cache hit rates.
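The repository idea (content evicted from the small line-speed cache remains retrievable from persistent storage) can be pictured as a two-tier store. The sketch below is a toy Python illustration under assumed names; it is not the CCNx repository implementation.

    import os
    from collections import OrderedDict

    class TwoTierCache:
        """Small in-memory LRU (the 'line-speed' cache) backed by a persistent
        on-disk repository; a toy model, not the CCNx repository code."""

        def __init__(self, capacity, repo_dir):
            self.capacity = capacity
            self.lru = OrderedDict()               # content name -> content bytes
            self.repo_dir = repo_dir
            os.makedirs(repo_dir, exist_ok=True)

        def _repo_path(self, name):
            return os.path.join(self.repo_dir, name.replace("/", "_"))

        def put(self, name, content):
            self.lru[name] = content
            self.lru.move_to_end(name)
            if len(self.lru) > self.capacity:      # evict into the repository
                old_name, old_content = self.lru.popitem(last=False)
                with open(self._repo_path(old_name), "wb") as f:
                    f.write(old_content)

        def get(self, name):
            if name in self.lru:                   # fast path: cache hit
                self.lru.move_to_end(name)
                return self.lru[name]
            path = self._repo_path(name)
            if os.path.exists(path):               # slower hit from persistent storage
                with open(path, "rb") as f:
                    content = f.read()
                self.put(name, content)
                return content
            return None                            # miss: forward the request upstream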

Relevance:

10.00%

Publisher:

Abstract:

Polyvariant specialization allows generating multiple versions of a procedure, which can then be separately optimized for different uses. Since allowing a high degree of polyvariance often results in more optimized code, polyvariant specializers, such as most partial evaluators, can generate a large number of versions. This can produce unnecessarily large residual programs. Also, large programs can be slower due to cache miss effects. A possible solution to this problem is to introduce a minimization step which identifies sets of equivalent versions, and replace all occurrences of such versions by a single one. In this work we present a unifying view of the problem of superfluous polyvariance. It includes both partial deduction and abstract multiple specialization. As regards partial deduction, we extend existing approaches in several ways. First, previous work has dealt with pure logic programs and a very limited class of builtins. Herein we propose an extension to traditional characteristic trees which can be used in the presence of calls to external predicates. This includes all builtins, libraries, other user modules, etc. Second, we propose the possibility of collapsing versions which are not strictly equivalent. This allows trading time for space and can be useful in the context of embedded and pervasive systems. This is done by residualizing certain computations for external predicates which would otherwise be performed at specialization time. Third, we provide an experimental evaluation of the potential gains achievable using minimization which leads to interesting conclusions.
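The minimization step itself (identify sets of equivalent versions and collapse each set to a single representative) can be sketched generically. The Python below groups versions by an equivalence key supplied by the caller; in the paper the comparison is based on characteristic trees, whereas here the key, the names and the example versions are all hypothetical.

    def minimize_versions(versions, equivalence_key):
        """Collapse equivalent specialized versions: keep one representative per
        equivalence class and record how call sites should be renamed."""
        representative = {}      # equivalence key -> name of the version kept
        rename = {}              # original version name -> surviving version name
        for name, code in versions.items():
            key = equivalence_key(code)
            if key not in representative:
                representative[key] = name
            rename[name] = representative[key]
        kept = {name: versions[name] for name in set(representative.values())}
        return kept, rename

    # Hypothetical example: three specialized versions, two of them identical.
    versions = {
        "app_1": "app([],Ys,Ys).",
        "app_2": "app([X|Xs],Ys,[X|Zs]) :- app(Xs,Ys,Zs).",
        "app_3": "app([],Ys,Ys).",
    }
    kept, rename = minimize_versions(versions, equivalence_key=lambda code: code)
    print(sorted(kept))   # ['app_1', 'app_2']
    print(rename)         # {'app_1': 'app_1', 'app_2': 'app_2', 'app_3': 'app_1'}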