158 results for Summed probability functions
Abstract:
Randomising set index functions can reduce the number of conflict misses in data caches by spreading the cache blocks uniformly over all sets. Typically, the randomisation functions compute exclusive-ORs (XORs) of several address bits. Not all randomising set index functions perform equally well, which calls for the evaluation of many set index functions. This paper discusses and improves a technique that tackles this problem by predicting the miss rate incurred by a randomisation function, based on profiling information. A new way of looking at randomisation functions is used, namely the null space of the randomisation function. The members of the null space describe pairs of cache blocks that are mapped to the same set. This paper presents an analytical model of the error made by the technique and uses it to propose several optimisations. The technique is then applied to generate a conflict-free randomisation function for the SPEC benchmarks. (C) 2003 Elsevier Science B.V. All rights reserved.
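As a concrete illustration, here is a minimal Python sketch of an XOR-based set index function and the null-space property described above. The 12-bit block addresses, 6-bit index, and bit masks are hypothetical stand-ins, not the functions evaluated in the paper.

```python
# Toy XOR-based randomising set index function: output bit i is the
# parity (XOR) of the address bits selected by ROWS[i].
def parity(x):
    return bin(x).count("1") & 1

ROWS = [0b100000100001,  # arbitrary example masks over 12 address bits
        0b010000010010,
        0b001000001100,
        0b000100100100,
        0b000010010001,
        0b000001001010]

def set_index(block_addr):
    return sum(parity(block_addr & m) << i for i, m in enumerate(ROWS))

# The map is linear over GF(2), so two blocks a and b land in the same
# set exactly when their difference a ^ b lies in the null space.
a, b = 0b101101001011, 0b001101001011
assert (set_index(a) == set_index(b)) == (set_index(a ^ b) == 0)
```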
Abstract:
Taxonomic studies of the past few years have shown that the Burkholderia cepacia complex, a heterogeneous group of B. cepacia-like organisms, consists of at least nine species. B. cepacia complex strains are ubiquitously distributed in nature and have been used for biocontrol, bioremediation, and plant growth promotion purposes. At the same time, B. cepacia complex strains have emerged as important opportunistic pathogens of humans, particularly those with cystic fibrosis. All B. cepacia complex species investigated thus far use quorum-sensing (QS) systems that rely on N-acylhomoserine lactone (AHL) signal molecules to express certain functions, including the production of extracellular proteases, swarming motility, biofilm formation, and pathogenicity, in a population-density-dependent manner. In this study we constructed a broad-host-range plasmid that allowed the heterologous expression of the Bacillus sp. strain 240B1 AiiA lactonase, which hydrolyzes the lactone ring of various AHL signal molecules, in all described B. cepacia complex species. We show that expression of AiiA abolished or greatly reduced the accumulation of AHL molecules in the culture supernatants of all tested B. cepacia complex strains. Phenotypic characterization of wild-type and transgenic strains revealed that protease production, swarming motility, biofilm formation, and Caenorhabditis elegans killing efficiency were regulated by AHL in the large majority of strains investigated.
Abstract:
The fusion process is known to be the initial step of viral infection, and hence targeting the entry process is a promising strategy for designing antiviral therapy. Self-inhibitory peptides derived from the viral envelope (E) proteins function to inhibit the protein-protein interactions in the membrane fusion step mediated by the viral E protein. Thus, they have the potential to be developed into effective antiviral therapy. Herein, we have developed a Monte Carlo-based computational method with the aim of identifying and optimizing potential peptide hits from the E proteins. The stability of the peptides, which indicates their potential to bind in situ to the E proteins, was evaluated by two different scoring functions: the dipolar distance-scaled, finite, ideal-gas reference state (dDFIRE) and the residue-specific all-atom probability discriminatory function (RAPDF). The method was applied to the α-helical Class I HIV-1 gp41, the β-sheet Class II Dengue virus (DENV) type 2 E protein, as well as the Class III Herpes Simplex virus-1 (HSV-1) glycoprotein B (gB), an E protein with a mixture of α-helix and β-sheet structural folds. The peptide hits identified are in line with the druggable regions from which the self-inhibitory peptide inhibitors for the three classes of viral fusion proteins were derived. Several novel peptides were identified from either the hydrophobic regions or the functionally important regions on the Class II DENV-2 E protein and the Class III HSV-1 gB. They have the potential to disrupt the protein-protein interactions in the fusion process and may serve as starting points for the development of novel inhibitors of viral E proteins.
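The abstract does not give implementation details, but a generic Metropolis Monte Carlo skeleton conveys the flavour of such a search. Everything here is a stand-in: `score` substitutes for the dDFIRE/RAPDF scoring functions and `mutate` for the actual peptide moves.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(peptide):
    """Point-mutate one randomly chosen residue."""
    i = random.randrange(len(peptide))
    return peptide[:i] + random.choice(AMINO_ACIDS) + peptide[i + 1:]

def score(peptide):
    """Placeholder energy; a real run would score in-situ binding
    stability with a statistical potential such as dDFIRE or RAPDF."""
    return -sum(ord(c) for c in peptide) % 97

def metropolis(peptide, steps=1000, temperature=1.0):
    """Standard Metropolis acceptance loop over candidate peptides."""
    current, energy = peptide, score(peptide)
    for _ in range(steps):
        candidate = mutate(current)
        delta = score(candidate) - energy
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current, energy = candidate, energy + delta
    return current, energy

print(metropolis("DWLKAFYDKVAEK"))  # arbitrary example seed peptide
```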
Abstract:
A new model to explain animal spacing, based on a trade-off between foraging efficiency and predation risk, is derived from biological principles. The model is able to explain not only the general tendency for animal groups to form, but some of the attributes of real groups. These include the independence of mean animal spacing from group population, the observed variation of animal spacing with resource availability and also with the probability of predation, and the decline in group stability with group size. The appearance of "neutral zones" within which animals are not motivated to adjust their relative positions is also explained. The model assumes that animals try to minimize a cost potential combining the loss of intake rate due to foraging interference and the risk from exposure to predators. The cost potential describes a hypothetical field giving rise to apparent attractive and repulsive forces between animals. Biologically based functions are given for the decline in interference cost and increase in the cost of predation risk with increasing animal separation. Predation risk is calculated from the probabilities of predator attack and predator detection as they vary with distance. Using example functions for these probabilities and foraging interference, we calculate the minimum cost potential for regular lattice arrangements of animals before generalizing to finite-sized groups and random arrangements of animals, showing optimal geometries in each case and describing how potentials vary with animal spacing. (C) 1999 Academic Press.
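A worked toy example in Python, using example cost functions of my own choosing in the spirit of the model (interference cost falling and predation-risk cost rising with separation r); the minimum of the summed potential gives the preferred spacing.

```python
import math

def interference(r, a=1.0, r_i=1.0):
    return a * math.exp(-r / r_i)          # foraging loss decays with r

def predation_risk(r, b=0.5, r_p=5.0):
    return b * (1.0 - math.exp(-r / r_p))  # exposure grows with r

def cost(r):
    return interference(r) + predation_risk(r)

# Grid search for the minimum-cost spacing; for these parameters the
# analytical optimum is r* = (5/4) * ln(10), roughly 2.88.
r_star = min((cost(r / 100), r / 100) for r in range(1, 2001))[1]
print(f"optimal spacing ~ {r_star:.2f}")
```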
Abstract:
Building on a proof by D. Handelman of a generalisation of an example due to L. Fuchs, we show that the space of real-valued polynomials on a non-empty set X of reals has the Riesz Interpolation Property if and only if X is bounded.
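For reference, the Riesz Interpolation Property in question, stated for the pointwise order (p ≤ q meaning p(x) ≤ q(x) for every x in X); this is the standard formulation, supplied here as a gloss:

```latex
f_1, f_2 \le g_1, g_2
\quad\Longrightarrow\quad
\exists\, h \ \text{with} \ f_i \le h \le g_j \quad (i, j \in \{1, 2\}).
```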
Abstract:
The evolution of the amplitudes of two nonlinearly interacting waves is considered, via a set of coupled nonlinear Schrödinger-type equations. The dynamical profile is determined by the wave dispersion laws (i.e. the group velocities and the group-velocity-dispersion terms) and the nonlinearity and coupling coefficients, on which no assumption is made. A generalized dispersion relation is obtained, relating the frequency and wavenumber of a small perturbation around a coupled monochromatic (Stokes') wave solution. Explicit stability criteria are obtained. The analysis reveals a number of possibilities. Two (individually) stable systems may be destabilized due to coupling. Unstable systems may, when coupled, present an enhanced instability growth rate over an extended range of wavenumbers. Distinct unstable wavenumber windows may arise simultaneously.
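A generic form of such a coupled system, in notation of my own choosing (the abstract does not fix one): v_j are the group velocities, P_j the group-velocity-dispersion coefficients, and Q_jk the nonlinearity and coupling coefficients.

```latex
i\left(\frac{\partial \psi_j}{\partial t} + v_j \frac{\partial \psi_j}{\partial x}\right)
+ P_j \frac{\partial^2 \psi_j}{\partial x^2}
+ \left( Q_{j1}\,|\psi_1|^2 + Q_{j2}\,|\psi_2|^2 \right) \psi_j = 0,
\qquad j = 1, 2.
```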
Abstract:
A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative for estimating WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample by a factor of about 8 when the forest is 'poolable'. © 2008 Elsevier Inc. All rights reserved.
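The standard BMA identity underlying the approach (notation mine): the WTP estimate is a mixture over the candidate models M_k, weighted by their posterior probabilities given the data D.

```latex
\mathbb{E}[\mathrm{WTP} \mid D]
= \sum_{k} p(M_k \mid D)\, \mathbb{E}[\mathrm{WTP} \mid M_k, D],
\qquad
p(M_k \mid D) \propto p(D \mid M_k)\, p(M_k).
```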
Abstract:
A nonparametric, small-sample-size test for the homogeneity of two psychometric functions against the left- and right-shift alternatives has been developed. The test is designed to determine whether it is safe to amalgamate psychometric functions obtained in different experimental sessions. The sum of the lower and upper p-values of the exact (conditional) Fisher test for several 2 × 2 contingency tables (one for each point of the psychometric function) is employed as the test statistic. The probability distribution of the statistic under the null (homogeneity) hypothesis is evaluated to obtain corresponding p-values. Power functions of the test have been computed by randomly generating samples from Weibull psychometric functions. The test is free of any assumptions about the shape of the psychometric function; it requires only that all observations are statistically independent. © 2011 Psychonomic Society, Inc.
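A Python sketch of the test statistic under one reading of the description (the table layout and the combination rule are my assumptions; the paper evaluates the null distribution of the sum separately):

```python
from scipy.stats import fisher_exact

def shift_statistic(tables):
    """tables: one [[correct_A, wrong_A], [correct_B, wrong_B]] count
    table per point of the psychometric function, for sessions A and B."""
    total = 0.0
    for tab in tables:
        _, p_lower = fisher_exact(tab, alternative="less")
        _, p_upper = fisher_exact(tab, alternative="greater")
        total += p_lower + p_upper
    return total

# Example: three stimulus levels observed in two sessions.
print(shift_statistic([[[8, 2], [6, 4]],
                       [[5, 5], [7, 3]],
                       [[9, 1], [8, 2]]]))
```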
Abstract:
Chronic lung diseases such as cystic fibrosis and emphysema are characterized by a protease burden, an infective process and a dominant proinflammatory profile. Secretory leucoprotease inhibitor (SLPI) is a prominent innate immune protein of the respiratory tract, possessing serine protease inhibitor activity, antibacterial activity, and anti-inflammatory/immunomodulatory activity. In the course of this review, the authors highlight the findings from a range of studies that illustrate the multiple functions of SLPI and its role in the resolution of the immune response.
Abstract:
The use of systolic arrays of 1-bit cells to implement a range of important signal processing functions is demonstrated. Two examples, a pipelined multiplier and a pipelined bit-slice transform circuit, are given. This approach has many important implications for silicon technology, and these are outlined.
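A behavioural Python sketch of the shift-and-add scheme that a pipelined array of 1-bit cells realises in hardware; this is a toy model of the arithmetic only, not of the circuit or its pipelining.

```python
def bit_serial_multiply(a, b, width=8):
    """Shift-and-add multiply: one conditional add per bit of b, the
    work a 1-bit cell performs at each pipeline stage."""
    acc = 0
    for i in range(width):
        if (b >> i) & 1:      # 1-bit cell: AND gate plus conditional add
            acc += a << i
    return acc

assert bit_serial_multiply(13, 11) == 143
```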
Abstract:
Bit-level systolic-array structures for computing sums of products are studied in detail. It is shown that these can be subdivided into two classes and that within each class architectures can be described in terms of a set of constraint equations. It is further demonstrated that high-performance system-level functions with attractive VLSI properties can be constructed by matching data-flow geometries in bit-level and word-level architectures.
Abstract:
In three studies we looked at two typical misconceptions of probability: the representativeness heuristic and the equiprobability bias. The literature on statistics education predicts that some typical errors and biases (e.g., the equiprobability bias) increase with education, whereas others decrease. This contrasts with the prediction of reasoning theorists, who propose that education reduces misconceptions in general. They also predict that students with higher cognitive ability and higher need for cognition are less susceptible to biases. In Experiments 1 and 2 we found that the equiprobability bias increased with statistics education, and it was negatively correlated with students' cognitive abilities. The representativeness heuristic was mostly unaffected by education, and it was also unrelated to cognitive abilities. In Experiment 3 we demonstrated through an instruction manipulation (by asking participants to think logically vs. rely on their intuitions) that the reason for these differences was that the two biases originate in different cognitive processes.
Dual processes in learning and judgment: Evidence from the multiple cue probability learning paradigm
Abstract:
Multiple cue probability learning (MCPL) involves learning to predict a criterion based on a set of novel cues when feedback is provided in response to each judgment made. But to what extent does MCPL require controlled attention and explicit hypothesis testing? The results of two experiments show that this depends on cue polarity. Learning about cues that predict positively is aided by automatic cognitive processes, whereas learning about cues that predict negatively is especially demanding of controlled attention and hypothesis-testing processes. In the studies reported here, negative, but not positive, cue learning was related to individual differences in working memory capacity, both on measures of overall judgment performance and in modelling of the implicit learning process. However, the introduction of a novel method to monitor participants' explicit beliefs about a set of cues on a trial-by-trial basis revealed that participants were engaged in explicit hypothesis testing about both positive and negative cues, and explicit beliefs about both types of cues were linked to working memory capacity. Taken together, our results indicate that while people are engaged in explicit hypothesis testing during cue learning, explicit beliefs are applied to judgment only when cues are negative. © 2012 Elsevier Inc.