991 results for uncertainty-functions
Abstract:
Bank conflicts can severely reduce the bandwidth of an interleaved multibank memory, and conflict misses increase the miss rate of a cache or a predictor. Both are manifestations of the same problem: objects that should be mapped to different indices are accidentally mapped to the same index. Suitably chosen hash functions can avoid conflicts in each of these situations by mapping the most frequently occurring patterns conflict-free. A particularly interesting class of hash functions is the XOR-based hash functions, which compute each set index bit as the exclusive-or of a subset of the address bits. When implementing an XOR-based hash function, it is extremely important to understand which patterns are mapped conflict-free and how a hash function can be constructed to map the most frequently occurring patterns without conflicts. To this end, this paper presents two ways to reason about hash functions: by their null space and by their column space. The null space helps to quickly determine whether a pattern is mapped conflict-free. The column space is more useful for other purposes, e.g., to reduce the fan-in of the XOR gates without introducing conflicts or to evaluate interbank dispersion in skewed-associative caches. Examples illustrate how these ideas can be applied to construct conflict-free hash functions.
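For intuition, here is a minimal sketch (not from the paper; the masks and addresses are invented) of an XOR-based hash and of the null-space test for conflicts: a pair of addresses collides exactly when their XOR difference is in the null space of the hash.

```python
# Each set index bit is the XOR (a GF(2) dot product) of the address bits
# selected by one mask; a pair of addresses differing by pattern `delta`
# is mapped conflict-free iff `delta` lies outside the hash's null space.

def xor_hash(addr, masks):
    """Index bit i is the parity of the address bits selected by masks[i]."""
    index = 0
    for i, mask in enumerate(masks):
        index |= (bin(addr & mask).count("1") & 1) << i
    return index

def maps_conflict_free(delta, masks):
    """True iff some index bit changes, i.e. delta is not in the null space."""
    return any(bin(delta & mask).count("1") & 1 for mask in masks)

# Hypothetical 4-set cache (2 index bits) over 6 address bits.
masks = [0b101001, 0b010110]
a, b = 0b100110, 0b001111
print(xor_hash(a, masks), xor_hash(b, masks))   # distinct indices: 1 and 0
print(maps_conflict_free(a ^ b, masks))         # True: this pair cannot conflict
```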
Abstract:
Caches hide the growing latency of accesses to main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct-mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
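As a rough sketch of the profiling idea (the trace, masks, and cache model below are invented for illustration, not taken from the paper), candidate functions built from 2-input XORs can be scored on a recorded block-address trace and the least-conflicting one kept:

```python
# Score candidate XOR-based index functions on an address trace by counting
# misses in a simple direct-mapped model; cold misses hit every candidate
# equally, so the ranking reflects conflict behaviour.

def parity(x):
    return bin(x).count("1") & 1

def make_index_fn(masks):
    # each mask selects at most 2 address bits, i.e. 2-input XORs
    return lambda addr: sum(parity(addr & m) << i for i, m in enumerate(masks))

def count_misses(trace, index_fn, num_sets):
    resident = {}                       # set index -> resident block address
    misses = 0
    for block in trace:
        s = index_fn(block) % num_sets
        if resident.get(s) != block:
            misses += 1
            resident[s] = block
    return misses

trace = [0b0000, 0b0100, 0b1000, 0b0000, 0b0100]   # hypothetical profile
candidates = [[0b0011, 0b1100], [0b0101, 0b1010]]  # hypothetical masks
best = min(candidates, key=lambda m: count_misses(trace, make_index_fn(m), 4))
print(best)   # the second candidate avoids the conflict on this trace
```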
Abstract:
Randomising set index functions can reduce the number of conflict misses in data caches by spreading the cache blocks uniformly over all sets. Typically, the randomisation functions compute the exclusive-ors of several address bits. Not all randomising set index functions perform equally well, which calls for the evaluation of many set index functions. This paper discusses and improves a technique that tackles this problem by predicting the miss rate incurred by a randomisation function, based on profiling information. A new way of looking at randomisation functions is used, namely their null space. The members of the null space describe pairs of cache blocks that are mapped to the same set. This paper presents an analytical model of the error made by the technique and uses it to propose several optimisations. The technique is then applied to generate a conflict-free randomisation function for the SPEC benchmarks.
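To make the null-space view concrete, the sketch below (an assumed matrix representation, not code from the paper) computes a null-space basis of a randomisation function over GF(2) by Gaussian elimination; every pair of cache blocks whose addresses differ by a combination of basis vectors is mapped to the same set.

```python
def null_space_gf2(rows, n_bits):
    """Null-space basis (as ints) of the binary matrix whose rows are the
    given n_bits-wide XOR masks, computed over GF(2)."""
    pivots, reduced = [], []
    for row in rows:
        for p, r in zip(pivots, reduced):   # eliminate known pivot bits
            if (row >> p) & 1:
                row ^= r
        if row:                             # new pivot: highest remaining bit
            p = row.bit_length() - 1
            reduced = [r ^ row if (r >> p) & 1 else r for r in reduced]
            pivots.append(p)
            reduced.append(row)
    basis = []
    for j in range(n_bits):                 # one basis vector per free column
        if j in pivots:
            continue
        v = 1 << j
        for p, r in zip(pivots, reduced):   # back-substitute pivot bits
            if (r >> j) & 1:
                v ^= 1 << p
        basis.append(v)
    return basis

print([bin(v) for v in null_space_gf2([0b0011, 0b1100], 4)])
# address pairs differing by (XOR-combinations of) these patterns collide
```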
Abstract:
Taxonomic studies of the past few years have shown that the Burkholderia cepacia complex, a heterogeneous group of B. cepacia-like organisms, consists of at least nine species. B. cepacia complex strains are ubiquitously distributed in nature and have been used for biocontrol, bioremediation, and plant growth promotion purposes. At the same time, B. cepacia complex strains have emerged as important opportunistic pathogens of humans, particularly those with cystic fibrosis. All B. cepacia complex species investigated thus far use quorum-sensing (QS) systems that rely on N-acylhomoserine lactone (AHL) signal molecules to express certain functions, including the production of extracellular proteases, swarming motility, biofilm formation, and pathogenicity, in a population-density-dependent manner. In this study we constructed a broad-host-range plasmid that allowed the heterologous expression of the Bacillus sp. strain 240B1 AiiA lactonase, which hydrolyzes the lactone ring of various AHL signal molecules, in all described B. cepacia complex species. We show that expression of AiiA abolished or greatly reduced the accumulation of AHL molecules in the culture supernatants of all tested B. cepacia complex strains. Phenotypic characterization of wild-type and transgenic strains revealed that protease production, swarming motility, biofilm formation, and Caenorhabditis elegans killing efficiency were regulated by AHL in the large majority of strains investigated.
Abstract:
Building on a proof by D. Handelman of a generalisation of an example due to L. Fuchs, we show that the space of real-valued polynomials on a non-empty set X of reals has the Riesz Interpolation Property if and only if X is bounded.
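For reference, the standard statement of the property in question (taken from the general literature, not from the paper itself) is:

```latex
% Riesz Interpolation Property for a partially ordered vector space $E$:
\forall\, a_1, a_2, b_1, b_2 \in E \text{ with } a_i \le b_j \ (i, j \in \{1, 2\}),
\quad \exists\, c \in E : \; a_i \le c \le b_j \quad (i, j \in \{1, 2\}).
```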
Abstract:
The evolution of the amplitude of two nonlinearly interacting waves is considered, via a set of coupled nonlinear Schrödinger-type equations. The dynamical profile is determined by the wave dispersion laws (i.e. the group velocities and the group velocity dispersion terms) and the nonlinearity and coupling coefficients, on which no assumption is made. A generalized dispersion relation is obtained, relating the frequency and wavenumber of a small perturbation around a coupled monochromatic (Stokes) wave solution. Explicit stability criteria are obtained. The analysis reveals a number of possibilities. Two (individually) stable systems may be destabilized due to coupling. Unstable systems may, when coupled, present an enhanced instability growth rate over an extended range of wavenumbers. Distinct unstable wavenumber windows may arise simultaneously.
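A generic form of such a system (illustrative notation only; the abstract fixes no symbols or coefficients) is a pair of coupled NLS-type equations:

```latex
% psi_j: wave envelopes; v_j: group velocities; P_j: group velocity
% dispersion; Q_{jj}: self-nonlinearity; Q_{jk} (k != j): coupling.
i\left( \frac{\partial \psi_j}{\partial t} + v_j \frac{\partial \psi_j}{\partial x} \right)
  + P_j \frac{\partial^2 \psi_j}{\partial x^2}
  + \left( Q_{j1} |\psi_1|^2 + Q_{j2} |\psi_2|^2 \right) \psi_j = 0,
  \qquad j = 1, 2.
```

Linearizing around the coupled monochromatic solution of such a system then yields a dispersion relation for the perturbation, from which stability criteria follow.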
Abstract:
A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative to estimate WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample by a factor of about eight when the forest is 'poolable'.
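In outline, BMA weights every candidate model by its posterior probability (standard formulas, stated here for convenience; $\Delta$ stands for the quantity of interest, e.g. WTP at the policy site, and each $M_k$ for one hypothesis about which subset of sites shares the benefit function):

```latex
p(\Delta \mid y) = \sum_{k=1}^{K} p(\Delta \mid y, M_k)\, p(M_k \mid y),
\qquad
p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_{l=1}^{K} p(y \mid M_l)\, p(M_l)} .
```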
Abstract:
An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from the elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, the results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a unique indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further evidence can be collected that might reduce the ambiguity due to epistemic uncertainty. This last feature contrasts with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
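A minimal sketch of the hybrid propagation step (the model, distribution, and bounds below are invented for illustration): aleatory inputs are sampled Monte Carlo style, epistemic inputs are carried through as intervals, and each sample yields an interval of outputs, so the probability of an event is itself bounded by an interval.

```python
import random

def model(a, e):
    # hypothetical risk model; monotone in e for fixed a, so evaluating at
    # the interval endpoints bounds the output on each sample
    return a * e

def hybrid_propagate(n_samples, e_lo, e_hi, threshold):
    sure = possible = 0
    for _ in range(n_samples):
        a = random.gauss(1.0, 0.2)                         # aleatory: sampled
        lo, hi = sorted((model(a, e_lo), model(a, e_hi)))  # epistemic: interval
        sure += hi <= threshold       # below threshold even in the worst case
        possible += lo <= threshold   # below threshold in at least one case
    # probability interval [P_lower, P_upper] for P(output <= threshold)
    return sure / n_samples, possible / n_samples

print(hybrid_propagate(10_000, e_lo=0.8, e_hi=1.2, threshold=1.0))
```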