916 results for Approximate Sum Rule


Relevance:

20.00%

Publisher:

Abstract:

Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent the objects that were recognised incorrectly by the yes-filter (that is, to recognise the yes-filter's false positives). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if the objects included in the no-filter are chosen so that the no-filter recognises as many false positives as possible but no true positives, producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is therefore recommended for use in yes-no Bloom filters.
In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
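The two-part structure described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the class names, sizes, and hash scheme are ours, and the optimal selection of false positives (the ILP/ADP step) is left to the caller of `add_false_positive`.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: a bit array of size m probed by k hash functions."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for clarity

    def _positions(self, item):
        # derive k positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    """Yes-filter stores the set; no-filter stores selected false positives."""
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # the caller must ensure `item` is not a true member
        # (this is the selection problem the ILP/ADP model solves)
        self.no.add(item)

    def __contains__(self, item):
        # accept only if the yes-filter matches and the no-filter does not
        return item in self.yes and item not in self.no
```

A query thus passes through both filters: an object recognised by the yes-filter is still rejected if the no-filter recognises it as a known false positive.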


Formal conceptions of the rule of law are popular among contemporary legal philosophers. Nonetheless, the coherence of accounts of the rule of law committed to these conceptions is sometimes fractured by elements harkening back to substantive conceptions of the rule of law. I suggest that this may be because at its origins the ideal of the rule of law was substantive through and through. I also argue that those origins are older than is generally supposed. Most authors tend to trace the ideas of the rule of law and natural law back to classical Greece, but I show that they are already recognisable and intertwined as far back as Homer. Because the founding moment of the tradition of western intellectual reflection on the rule of law placed concerns about substantive justice at the centre of the rule of law ideal, it may be hard for this ideal to entirely shrug off its substantive content. It may be undesirable, too, given the rhetorical power of appeals to the rule of law. The rule of law means something quite radical in Homer; this meaning may provide a source of normative inspiration for contemporary reflections about the rule of law.


Approximate Bayesian computation (ABC) is a popular family of algorithms which perform approximate parameter inference when numerical evaluation of the likelihood function is not possible but data can be simulated from the model. They return a sample of parameter values which produce simulations close to the observed dataset. A standard approach is to reduce the simulated and observed datasets to vectors of summary statistics and accept when the difference between these is below a specified threshold. ABC can also be adapted to perform model choice. In this article, we present a new software package for R, abctools, which provides methods for tuning ABC algorithms. This includes recent dimension-reduction algorithms to tune the choice of summary statistics, and coverage methods to tune the choice of threshold. We provide several illustrations of these routines on applications taken from the ABC literature.
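The accept/reject scheme described above is straightforward to sketch. abctools itself is an R package; the following is a generic Python illustration of rejection ABC, with function names and a toy model of our own choosing.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, summary, eps,
                  n_draws=10000, seed=0):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistics land within distance eps of the observed summaries."""
    rng = np.random.default_rng(seed)
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)                 # draw from the prior
        s_sim = summary(simulate(theta, rng))   # simulate and summarise
        if np.linalg.norm(s_sim - s_obs) < eps:  # threshold on the distance
            accepted.append(theta)
    return np.array(accepted)

# toy example: infer the mean of a normal with known sd = 1,
# using the sample mean as the (sufficient, here) summary statistic
obs = np.random.default_rng(1).normal(2.0, 1.0, size=100)
post = abc_rejection(
    obs,
    simulate=lambda th, rng: rng.normal(th, 1.0, size=100),
    prior_draw=lambda rng: rng.uniform(-5, 5),
    summary=lambda x: np.array([np.mean(x)]),
    eps=0.1,
)
```

Tuning the summary statistics and the threshold `eps` is exactly the problem the package addresses: poor choices of either inflate the approximation error of the returned sample.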


In order to gain insights into events and issues that may cause errors and outages in parts of IP networks, intelligent methods that capture and express causal relationships online (in real time) are needed. Whereas generalised rule induction has been explored for non-streaming data applications, its application and adaptation to streaming data is mostly undeveloped or based on periodic, ad-hoc training with batch algorithms. Some association rule mining approaches for streaming data do exist; however, they can only express binary causal relationships. This paper presents ongoing work on Online Generalised Rule Induction (OGRI), which aims to create expressive and adaptive rule sets in real time that can be applied to a broad range of applications, including network telemetry data streams.
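OGRI itself is not specified in this abstract; as a point of reference, the binary relationships that existing streaming association-rule approaches express amount to online support and confidence counts for rules of the form a → b. A minimal sketch (class and method names are ours):

```python
from collections import Counter

class OnlineRuleStats:
    """Track support and confidence of binary rules (a -> b)
    incrementally over a stream of transactions (itemsets)."""
    def __init__(self):
        self.n = 0
        self.item = Counter()   # transactions containing each item
        self.pair = Counter()   # transactions containing both items

    def update(self, transaction):
        # one pass per arriving transaction; no batch retraining needed
        self.n += 1
        items = set(transaction)
        for a in items:
            self.item[a] += 1
        for a in items:
            for b in items:
                if a != b:
                    self.pair[(a, b)] += 1

    def confidence(self, a, b):
        # estimate of P(b | a) from the counts seen so far
        return self.pair[(a, b)] / self.item[a] if self.item[a] else 0.0

    def support(self, a, b):
        # fraction of transactions containing both a and b
        return self.pair[(a, b)] / self.n if self.n else 0.0
```

The limitation the paper points at is visible here: each rule relates exactly one antecedent to one consequent, whereas generalised rule induction aims at richer, multi-attribute rule structures.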


The redesign of defined benefit pension schemes usually results in a substantial redistribution of wealth between age cohorts of members, pensioners, and the sponsor. This is the first study to quantify the redistributive effects of a rule change by a real world scheme (the Universities Superannuation Scheme, USS) where the sponsor underwrites the pension promise. In October 2011 USS closed its final salary scheme to new members, opened a career average revalued earnings (CARE) section, and moved to ‘cap and share’ contribution rates. We find that the pre-October 2011 scheme was not viable in the long run, while the post-October 2011 scheme is probably viable in the long run, but faces medium term problems. In October 2011 future members of USS lost 65% of their pension wealth (or roughly £100,000 per head), equivalent to a reduction of roughly 11% in their total compensation, while those aged over 57 years lost almost nothing. The riskiness of the pension wealth of future members increased by a third, while the riskiness of the present value of the sponsor’s future contributions reduced by 10%. Finally, the sponsor’s wealth increased by about £32.5 billion, equivalent to a reduction of 26% in their pension costs.


The sum of wheat flour and corn starch was replaced by 10, 20, or 30% whole amaranth flour in both conventional (C) and reduced-fat (RF) pound cakes, and the effects on the physical and sensory properties of the cakes were investigated. The RF cakes presented a 33% fat reduction. Increasing amaranth levels darkened the crust and crumb of the cakes, which decreased color acceptability. Fresh amaranth-containing cakes had texture characteristics similar to the controls, evaluated both instrumentally and sensorially. Sensory evaluation revealed that replacement by 30% amaranth flour decreased the overall acceptability scores of C cakes, due to their lower specific volume and darker color. Amaranth flour levels had no significant effect on the overall acceptability of RF cakes. Hence, the sum of wheat flour and corn starch could be successfully replaced by up to 20% amaranth flour in C and up to 30% in RF pound cakes without negatively affecting sensory quality in fresh cakes. Moisture losses were similar for all cakes, approximately 1% per day during storage. After six days of storage, both C and RF amaranth-containing cakes had higher hardness and chewiness values than the control cakes. Further experiments involving sensory evaluation during storage are necessary to determine the exact limit of amaranth flour replacement.


Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern with these tools: the classification of the temporal organization of a data set might indicate that one series is less ordered than another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window length and matching error tolerance. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, and random noise). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn correctly does. To validate the tool we performed shuffled and surrogate data analysis. Statistical analysis confirmed the consistency of the method. (C) 2008 Elsevier Ltd. All rights reserved.
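The underlying ApEn statistic that vApEn sweeps over combinations of window length m and tolerance r can be sketched as follows; this is a minimal Python rendering of Pincus's definition (the function name and default parameters are ours):

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """ApEn(m, r) of a 1-D series: phi(m) - phi(m+1), where phi(m) is the
    mean log-fraction of length-m windows within Chebyshev distance r."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def phi(m):
        # embed the series as overlapping windows of length m
        windows = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev (max-coordinate) distance between every pair of windows
        dist = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=2)
        # fraction of windows within tolerance r (self-matches included,
        # as in the original ApEn definition)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

Sweeping this function over a grid of (m, r) values and aggregating the results is, in outline, what the volumetric variant does to escape the single-(m, r) consistency problem.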


We introduce a problem called maximum common characters in blocks (MCCB), which arises in applications of approximate string comparison, particularly in the unification of possibly erroneous textual data coming from different sources. We show that this problem is NP-complete, but can nevertheless be solved satisfactorily using integer linear programming for instances of practical interest. Two integer linear formulations are proposed and compared in terms of their linear relaxations. We also compare the results of the approximate matching with other known measures such as the Levenshtein (edit) distance. (C) 2008 Elsevier B.V. All rights reserved.
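For reference, the Levenshtein (edit) distance used above as a comparison measure can be computed with the classic dynamic program; a minimal sketch (the function name is ours):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions turning a into b; O(len(a)*len(b)) time, O(len(b)) space."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]
```

Unlike MCCB, which counts common characters within aligned blocks, this measure charges each edit operation individually, which is one reason the two can rank string pairs differently.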


We calculate the form factors and the coupling constant of the D*Dρ vertex in the framework of QCD sum rules. We evaluate the three-point correlation functions of the vertex considering the D, ρ and D* mesons off-shell. The form factors obtained are very different but give the same coupling constant: g_{D*Dρ} = 4.3 ± 0.9 GeV^-1. (C) 2011 Elsevier B.V. All rights reserved.


We use QCD sum rules to calculate the branching ratio for the production of the meson X(3872) in the decay B → X(3872)K, assumed to be a mixture of charmonium and exotic molecular (cq̄)(qc̄) states with J^PC = 1^++. We find that for a small range of values of the mixing angle, 5° ≤ θ ≤ 13°, we get the branching ratio B(B → XK) = (1.00 ± 0.68) × 10^-5, which is in agreement with the experimental upper limit. This result is compatible with the analysis of the mass and decay width of the mode J/ψ(nπ) and of the radiative decay mode J/ψγ performed in the same approach. (C) 2011 Elsevier B.V. All rights reserved.


We use QCD sum rules to test the nature of the recently observed mesons Y(4260), Y(4350) and Y(4660), assumed to be exotic four-quark (cc̄qq̄) or (cc̄ss̄) states with J^PC = 1^--. We work at leading order in α_s, consider the contributions of higher-dimension condensates and keep terms linear in the strange quark mass m_s. For the (cc̄ss̄) state we find a mass m_Y = (4.65 ± 0.10) GeV, compatible with the experimental candidate Y(4660), while for the (cc̄qq̄) state we find a mass m_Y = (4.49 ± 0.11) GeV, still consistent with the mass of the experimental candidate Y(4350). With the tetraquark structure we are working with, we cannot explain the Y(4260) as a tetraquark state. We also consider molecular D_s0 D̄_s* and D_0 D̄* states. For the D_s0 D̄_s* molecular state we get m(D_s0 D̄_s*) = (4.42 ± 0.10) GeV, which is consistent, considering the errors, with the mass of the meson Y(4350), and for the D_0 D̄* molecular state we get m(D_0 D̄*) = (4.27 ± 0.10) GeV, in excellent agreement with the mass of the meson Y(4260). (C) 2008 Elsevier B.V. All rights reserved.


We use QCD sum rules to study the recently observed meson Z^+(4430), considered as a D*D_1 molecule with J^P = 0^-. We consider the contributions of condensates up to dimension eight and work at leading order in α_s. We get m_Z = (4.40 ± 0.10) GeV, in very good agreement with the experimental value. We also make predictions for the analogous mesons Z_s and Z_bb, considered as D_s*D_1 and B*B_1 molecules, respectively. For Z_s we predict m_Zs = (4.70 ± 0.06) GeV, which is above the D_s*D_1 threshold, indicating that it is probably a very broad state and therefore difficult to observe experimentally. For Z_bb we predict m_Zbb = (10.74 ± 0.12) GeV, in agreement with quark model predictions. (C) 2008 Elsevier B.V. All rights reserved.


We calculate the form factors and the coupling constant of the ρD*D* vertex in the framework of QCD sum rules. We evaluate the three-point correlation functions of the vertex considering both the ρ and D* mesons off-shell. The form factors obtained are very different but give the same coupling constant: g_{ρD*D*} = 6.60 ± 0.31. This value is 50% larger than what we would expect from SU(4) estimates. (c) 2007 Elsevier B.V. All rights reserved.