56 results for cache coherence protocols


Relevance: 20.00%

Abstract:

Software transactional memory (STM) is a promising programming paradigm for shared-memory multithreaded programs as an alternative to traditional lock-based synchronization. However, adoption of STM in mainstream software has been quite low due to its considerable overheads and its poor cache/memory performance. In this paper, we perform a detailed study of the cache behavior of STM applications and quantify the impact of different STM factors on the cache misses experienced by the applications. Based on our analysis, we propose a compiler-driven Lock-Data Colocation (LDC) scheme, targeted at reducing the cache overheads of STM. We show that LDC is effective in improving the cache behavior of STM applications by reducing the D-cache miss latency and improving execution-time performance.
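The abstract does not spell out LDC's layout algorithm; the sketch below only illustrates the cache-line arithmetic behind colocation, with an assumed 64-byte line size and made-up addresses: keeping an object's STM lock word in a separate table means every transactional access touches two cache lines, whereas a colocated layout touches one.

```python
def lines_per_access(data_addr, lock_addr, line=64):
    """Distinct cache lines one transactional access must touch:
    the object itself plus its STM lock word (assumed 64-byte lines)."""
    return len({data_addr // line, lock_addr // line})

# Separate lock table: lock words live in their own array, far from the data.
print("separate lock table:", lines_per_access(0x1000, 0x200000))  # -> 2

# Colocated layout: the lock word is placed in the same line as its object.
print("colocated lock+data:", lines_per_access(0x1000, 0x1008))    # -> 1
```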

Relevance: 20.00%

Abstract:

In this work, we propose a new organization for the last level shared cache of a multicore system. Our design is based on the observation that the Next-Use distance, measured in terms of intervening misses between the eviction of a line and its next use, for lines brought in by a given delinquent PC falls within a predictable range of values. We exploit this correlation to improve the performance of shared caches in multi-core architectures by proposing the NUcache organization.
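A minimal software sketch of the correlation described above (not the NUcache hardware itself): for each delinquent PC, record the number of intervening misses between a line's eviction and its next use, and summarize the observed Next-Use distances as a predicted range. The structure names and the percentile band below are illustrative assumptions.

```python
from collections import defaultdict

class NextUseTracker:
    def __init__(self):
        self.miss_count = 0
        self.evicted_at = {}                 # line -> (delinquent PC, miss count at eviction)
        self.distances = defaultdict(list)   # PC -> observed Next-Use distances

    def on_miss(self, pc, line):
        self.miss_count += 1
        if line in self.evicted_at:
            evict_pc, t = self.evicted_at.pop(line)
            # Next-Use distance = misses between eviction and this reuse
            self.distances[evict_pc].append(self.miss_count - t)

    def on_eviction(self, pc, line):
        """pc is the delinquent PC that originally brought `line` into the cache."""
        self.evicted_at[line] = (pc, self.miss_count)

    def predicted_range(self, pc):
        d = sorted(self.distances[pc])
        if not d:
            return None
        return d[len(d) // 10], d[(9 * len(d)) // 10]   # rough 10th-90th percentile band
```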

Relevance: 20.00%

Abstract:

Multisensor recordings are becoming commonplace. When studying functional connectivity between different brain areas using such recordings, one defines regions of interest, and each region of interest is often characterized by a set (block) of time series. Presently, for two such regions, the interdependence is typically computed by estimating the ordinary coherence for each pair of individual time series and then summing or averaging the results over all such pairs of channels (one from block 1 and the other from block 2). The aim of this paper is to generalize the concept of coherence so that it can be computed for two blocks of non-overlapping time series. This quantity, called block coherence, is first shown mathematically to have properties similar to those of ordinary coherence, and then applied to analyze local field potential recordings from a monkey performing a visuomotor task. It is found that an increase in block coherence between the channels from the V4 region and the channels from the prefrontal region in the beta band leads to a decrease in response time.
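For reference, the ordinary magnitude-squared coherence between two single channels is shown first below; a natural determinant-based block generalization, which reduces to it in the single-channel case, is shown second. The block formula is an assumed candidate for illustration only and is not necessarily the exact definition used in the paper.

```latex
% Ordinary magnitude-squared coherence between two channels x and y:
\[
  C_{xy}(f) \;=\; \frac{\lvert S_{xy}(f)\rvert^{2}}{S_{xx}(f)\,S_{yy}(f)}.
\]
% Candidate block generalization (assumed form, for illustration only): let S(f)
% be the full cross-spectral matrix of the two blocks, with within-block diagonal
% blocks S_{11}(f) and S_{22}(f); then
\[
  C_{B}(f) \;=\; 1 - \frac{\det S(f)}{\det S_{11}(f)\,\det S_{22}(f)},
\]
% which reduces to the ordinary coherence when each block holds a single channel.
```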

Relevance: 20.00%

Abstract:

Let $\mathbb{D}$ denote the open unit disk in $\mathbb{C}$ centered at 0. Let $H^{\infty}_{\mathbb{R}}$ denote the set of all bounded and holomorphic functions $f$ defined on $\mathbb{D}$ that also satisfy $f(z) = \overline{f(\bar{z})}$ for all $z \in \mathbb{D}$. It is shown that $H^{\infty}_{\mathbb{R}}$ is a coherent ring.

Relevance: 20.00%

Abstract:

Several replacement policies for web caches have been proposed and studied extensively in the literature. Different replacement policies perform better in terms of (i) the number of objects found in the cache (cache hits), (ii) the network traffic avoided by fetching the referenced object from the cache, or (iii) the savings in response time. In this paper, we propose a simple and efficient replacement policy (hereafter referred to as SE) which improves all three performance measures. Trace-driven simulations were carried out to evaluate the performance of SE. We compare SE with two widely used and efficient replacement policies, namely the Least Recently Used (LRU) and Least Unified Value (LUV) algorithms. Our results show that SE performs at least as well as, if not better than, both these replacement policies. Unlike various other replacement policies proposed in the literature, our SE policy does not require parameter tuning or a priori trace analysis, and it has an efficient and simple implementation that can be incorporated into any existing proxy server or web server with ease.
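The abstract does not describe how SE itself works, so the sketch below only shows a minimal trace-driven harness for the three measures listed above (hit ratio, byte hit ratio, saved response time), using LRU as the baseline policy; the trace format and numbers are made up.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    """trace: iterable of (object_id, size_bytes, fetch_latency_seconds)."""
    cache, used = OrderedDict(), 0
    hits = bytes_hit = time_saved = 0
    reqs = bytes_req = 0
    for obj, size, latency in trace:
        reqs += 1
        bytes_req += size
        if obj in cache:
            cache.move_to_end(obj)       # refresh recency on a hit
            hits += 1
            bytes_hit += size
            time_saved += latency        # fetch from origin server avoided
        else:
            cache[obj] = size
            used += size
            while used > capacity_bytes:         # evict least recently used
                _, evicted_size = cache.popitem(last=False)
                used -= evicted_size
    return hits / reqs, bytes_hit / bytes_req, time_saved

trace = [("a", 10_000, 0.2), ("b", 5_000, 0.1), ("a", 10_000, 0.2), ("c", 20_000, 0.4)]
print(simulate_lru(trace, capacity_bytes=25_000))   # (hit ratio, byte hit ratio, seconds saved)
```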

Relevance: 20.00%

Abstract:

Today's feature-rich multimedia products require embedded system solutions built around complex Systems-on-Chip (SoCs) to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of an embedded system strongly influences critical system design objectives such as area, power and performance. Hence, the embedded system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce a short design cycle. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of exhaustive-search-based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache based architectures at the inner level. The two-step integrated data-layout approach for SPRAM-Cache based hybrid architectures first partitions data between the SPRAM and the cache, and then performs a cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to 34% improvement over an existing approach while also optimizing the off-chip memory address space. We evaluated our approach on three embedded multimedia applications; it explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
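A hedged sketch of the "cache-conscious layout as graph partitioning" idea: nodes are data objects, edge weights record how often two objects are accessed close together, and objects with high mutual weight are placed in different partitions so they do not conflict in the cache. The greedy partitioner and the affinity window below are generic illustrations, not the paper's formulation or heuristics.

```python
def affinity_graph(access_trace, window=8):
    """Edge weight = how often two objects appear within `window` accesses of each other."""
    weights = {}
    for i, a in enumerate(access_trace):
        for b in access_trace[i + 1 : i + window]:
            if a != b:
                edge = tuple(sorted((a, b)))
                weights[edge] = weights.get(edge, 0) + 1
    return weights

def greedy_partition(objects, weights, num_parts):
    """Place each object where it has the least accumulated affinity, so
    frequently co-accessed objects land in different partitions."""
    parts = [set() for _ in range(num_parts)]
    for obj in objects:
        cost = lambda part: sum(weights.get(tuple(sorted((obj, o))), 0) for o in part)
        min(parts, key=cost).add(obj)
    return parts

trace = list("ABCAABDACBDC")
print(greedy_partition(sorted(set(trace)), affinity_graph(trace), num_parts=2))
```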

Relevance: 20.00%

Abstract:

Diatoms have become important organisms for monitoring freshwaters, and their value has been recognised across the European, American and African continents. If India is to include diatoms in its current suite of bioindicators, then thorough testing of diatom-based techniques is required. This paper provides guidance on methods for all stages of diatom-based water quality assessment, from collection in different stream and lake habitats through sample preparation and examination, that can be adapted to most aquatic ecosystems in India.

Relevance: 20.00%

Abstract:

Cooperative relay communication in a fading channel environment under the orthogonal amplify-and-forward (OAF), non-orthogonal selection decode-and-forward (NSDF) and orthogonal selection decode-and-forward (OSDF) protocols is considered here. The diversity-multiplexing gain tradeoff (DMT) of the three protocols is determined, and DMT-optimal distributed space-time code constructions are provided. The codes constructed are sphere decodable and in some instances incur the minimum possible delay. Included in our results is the perhaps surprising finding that the OAF and non-orthogonal amplify-and-forward (NAF) protocols have identical DMT when the time durations of the broadcast and cooperative phases are optimally chosen to suit the respective protocol. Two variants of the NSDF protocol are considered: fixed-NSDF and variable-NSDF. In the variable-NSDF protocol, the fraction of time occupied by the broadcast phase is allowed to vary with the multiplexing gain. In the two-relay case, the variable-NSDF protocol is shown to improve on the DMT of the best previously known static protocol for higher values of multiplexing gain. Our results also establish that the fixed-NSDF protocol has a better DMT than the NAF protocol for any number of relays.
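For context, the standard diversity-multiplexing gain tradeoff definitions (due to Zheng and Tse) are reproduced below; the paper's exact notation and normalizations may differ.

```latex
% A scheme with rate R(SNR) and error probability P_e(SNR) achieves
% multiplexing gain r and diversity gain d(r), where
\[
  r \;=\; \lim_{\mathsf{SNR}\to\infty} \frac{R(\mathsf{SNR})}{\log \mathsf{SNR}},
  \qquad
  d(r) \;=\; -\lim_{\mathsf{SNR}\to\infty} \frac{\log P_e(\mathsf{SNR})}{\log \mathsf{SNR}}.
\]
```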

Relevance: 20.00%

Abstract:

The inherent temporal locality in memory accesses is filtered out by the L1 cache. As a consequence, an L2 cache with LRU replacement incurs significantly more misses than the optimal replacement policy (OPT). We propose to narrow this gap through a novel replacement strategy that mimics the replacement decisions of OPT. The L2 cache is logically divided into two components, a Shepherd Cache (SC) with simple FIFO replacement and a Main Cache (MC) with an emulation of optimal replacement. The SC plays the dual role of caching lines and guiding the replacement decisions in the MC. Our proposed organization can cover 40% of the gap between OPT and LRU for a 2MB cache, resulting in a 7% overall speedup. Comparison with the dynamic insertion policy, a victim buffer, a V-Way cache and an LRU-based fully associative cache demonstrates that our scheme performs better than all of these strategies.
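The organization above aims to mimic Belady's OPT, which evicts the resident line whose next use lies farthest in the future. A minimal offline OPT miss counter is sketched below for reference; the SC/MC hardware mechanism itself is not reproduced here.

```python
def opt_misses(trace, capacity):
    """Count misses under Belady's OPT for a fully associative cache of `capacity` lines."""
    cache, misses = set(), 0
    for i, line in enumerate(trace):
        if line in cache:
            continue
        misses += 1
        if len(cache) < capacity:
            cache.add(line)
            continue
        # evict the resident line reused farthest in the future (or never again)
        def next_use(resident):
            try:
                return trace.index(resident, i + 1)
            except ValueError:
                return float("inf")
        cache.remove(max(cache, key=next_use))
        cache.add(line)
    return misses

print(opt_misses(list("ABCABDABCD"), capacity=3))
```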

Relevance: 20.00%

Abstract:

Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. The efficiency of a cache for this application critically depends on the placement function to reduce conflict misses. Traditional placement functions use a one-level mapping that naively partitions trie nodes into cache sets. However, as a significant percentage of trie nodes are not useful, these schemes suffer from a non-uniform distribution of useful nodes to sets. This in turn results in increased conflict misses. Newer organizations such as variable-associativity caches achieve flexibility in placement at the expense of increased hit latency, which makes them unsuitable for L1 caches. We propose a novel two-level mapping framework that retains the hit latency of one-level mapping yet incurs fewer conflict misses. This is achieved by introducing a second-level mapping which reorganizes the nodes in the naive initial partitions into refined partitions with a near-uniform distribution of nodes. Further, as this remapping is accomplished by simply adapting the index bits to a given routing table, the hit latency is not affected. We propose three new schemes which result in up to a 16% reduction in the number of misses and a 13% speedup in memory access time. In comparison, an XOR-based placement scheme, known to perform extremely well for general-purpose architectures, obtains up to a 2% speedup in memory access time.
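A hedged sketch of the two-level idea: a naive first-level index is refined by a second mapping, built offline from the routing table, that spreads the useful nodes of each skewed first-level partition near-uniformly over the cache sets. The table-based construction below is only illustrative and is not one of the paper's three schemes; the set count and line size are assumptions.

```python
NUM_SETS = 8       # assumed number of cache sets (illustration only)
LINE_BITS = 6      # assumed 64-byte cache lines

def level1_index(node_addr):
    """Naive one-level placement: low index bits of the trie-node address."""
    return (node_addr >> LINE_BITS) % NUM_SETS

def build_level2_map(useful_nodes):
    """Second-level mapping: the k-th useful node of naive partition p is moved
    to set (p + k) mod NUM_SETS, evening out the useful nodes across sets."""
    mapping, counters = {}, [0] * NUM_SETS
    for addr in sorted(useful_nodes):
        part = level1_index(addr)
        mapping[addr >> LINE_BITS] = (part + counters[part]) % NUM_SETS
        counters[part] += 1
    return mapping

def placement(node_addr, level2_map):
    # nodes not present in the routing table fall back to the naive index
    return level2_map.get(node_addr >> LINE_BITS, level1_index(node_addr))

useful = [0x40 * i for i in range(0, 32, 2)]     # a skewed, made-up set of useful nodes
print([placement(a, build_level2_map(useful)) for a in useful])
```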

Relevance: 20.00%

Abstract:

Novel three-component chiral derivatization protocols have been developed for ¹H and ¹⁹F NMR spectroscopic discrimination of a series of chiral hydroxy acids through their coordination and self-assembly with optically active α-methylbenzylamine and 2-formylphenylboronic acid. In addition, optically pure (S)-mandelic acid in combination with 2-formylphenylboronic acid permits visualization of the enantiomers of primary amines. These protocols have been demonstrated for the enantiodiscrimination of chiral amines and hydroxy acids.

Relevance: 20.00%

Abstract:

We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank to separate the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype filter, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction. (c) 2012 Optical Society of America
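A minimal sketch of the zero-crossing interval idea on a single noisy tone: successive zero crossings of a sinusoid are spaced half a period apart, so interval statistics reveal its frequency. The filter-bank separation and the FDOCT-specific processing are omitted, and all numbers below are made up.

```python
import numpy as np

fs, f0, n = 10_000.0, 440.0, 4096           # assumed sampling rate, tone, record length
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.02 * np.random.randn(n)

# indices just before the signal changes sign
zc = np.where(np.signbit(x[1:]) != np.signbit(x[:-1]))[0]
intervals = np.diff(zc) / fs                # seconds between successive zero crossings

# each ZC interval is roughly half a period, so the mean interval gives the frequency
f_est = 1.0 / (2.0 * intervals.mean())
print(f"estimated {f_est:.1f} Hz, true {f0} Hz")
```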

Relevance: 20.00%

Abstract:

Effective sharing of the last-level cache has a significant influence on the overall performance of a multicore system. We observe that existing solutions control cache occupancy at a coarse granularity, do not scale well to large core counts, and in some cases lack the flexibility to support a variety of performance goals. In this paper, we propose Probabilistic Shared Cache Management (PriSM), a framework to manage the cache occupancy of different cores at cache-block granularity by controlling their eviction probabilities. The proposed framework requires only simple hardware changes to implement, scales to larger core counts, and is flexible enough to support a variety of performance goals. We demonstrate the flexibility of PriSM by computing the eviction probabilities needed to achieve goals such as hit maximization, fairness and QoS. PriSM-HitMax improves performance by 18.7% over LRU and by 11.8% over previously proposed schemes on a sixteen-core machine. PriSM-Fairness improves fairness over existing solutions by 23.3%, along with a performance improvement of 19.0%. PriSM-QoS successfully achieves the desired QoS targets.
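An illustrative sketch of steering occupancy through eviction probabilities: cores holding more than their target share of the shared cache are made more likely to supply the next victim. The rule below is a generic illustration; the exact PriSM formulas are not given in the abstract and are not reproduced here.

```python
def eviction_probabilities(current_blocks, target_share, total_blocks):
    """current_blocks: per-core block counts; target_share: per-core fractions summing to 1."""
    excess = [max(0.0, cur / total_blocks - tgt)
              for cur, tgt in zip(current_blocks, target_share)]
    total_excess = sum(excess)
    if total_excess == 0:
        # everyone is at or below target: evict in proportion to current occupancy
        return [cur / total_blocks for cur in current_blocks]
    return [e / total_excess for e in excess]

# four cores sharing 1024 blocks, with an equal-share target (made-up numbers)
current = [500, 300, 150, 74]
target = [0.25, 0.25, 0.25, 0.25]
print(eviction_probabilities(current, target, total_blocks=1024))
```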

Relevance: 20.00%

Abstract:

We address the reconstruction problem in frequency-domain optical-coherence tomography (FDOCT) from under-sampled measurements within the framework of compressed sensing (CS). Specifically, we propose optimal sparsifying bases for accurate reconstruction by analyzing the backscattered signal model. Although one might expect Fourier bases to be optimal for the FDOCT reconstruction problem, it turns out that the optimal sparsifying bases are windowed cosine functions where the window is the magnitude spectrum of the laser source. Further, the windowed cosine bases can be phase locked, which allows one to obtain higher accuracy in reconstruction. We present experimental validations on real data. The findings reported in this Letter are useful for optimal dictionary design within the framework of CS-FDOCT. (C) 2012 Optical Society of America
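A hedged sketch of the dictionary idea: atoms are cosines at candidate depths, windowed by the magnitude spectrum of the source. The Gaussian source spectrum, the depth grid and the atom count below are assumptions for illustration; the paper's exact dictionary construction and CS solver are not reproduced.

```python
import numpy as np

n_samples, n_atoms = 512, 256
k = np.arange(n_samples)                                  # wavenumber sample index
window = np.exp(-0.5 * ((k - n_samples / 2) / (n_samples / 6)) ** 2)  # assumed source spectrum

depths = np.linspace(0.5, 40.0, n_atoms)                  # candidate depths (arbitrary units)
atoms = window[:, None] * np.cos(2 * np.pi * np.outer(k, depths) / n_samples)
dictionary = atoms / np.linalg.norm(atoms, axis=0)        # unit-norm windowed-cosine atoms

# a sparse spectral interferogram: two reflectors expressed in this dictionary
signal = 1.0 * dictionary[:, 40] + 0.4 * dictionary[:, 170]
print(dictionary.shape, np.linalg.norm(signal))
```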

Relevance: 20.00%

Abstract:

We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.
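A hedged sketch of a Fienup-style error-reduction loop with a sparsity step (keep the K largest coefficients each iteration) alongside the usual support and positivity constraints. This is a generic 1-D illustration, not the algorithm or parameters developed in the paper.

```python
import numpy as np

def sparse_fienup(magnitude, support, k_sparse, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(magnitude))
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = magnitude * np.exp(1j * np.angle(X))          # impose measured Fourier magnitude
        x = np.real(np.fft.ifft(X))
        x = np.where(support, np.maximum(x, 0.0), 0.0)    # compact support + positivity
        keep = np.argsort(np.abs(x))[-k_sparse:]          # sparsity: keep the K largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1.0
        x = x * mask
    return x

n = 128
true = np.zeros(n)
true[[10, 17, 23]] = [1.0, 0.6, 0.3]                      # sparse, positive test object
support = np.arange(n) < 40
est = sparse_fienup(np.abs(np.fft.fft(true)), support, k_sparse=3)
print(np.round(est[support][:30], 2))
```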