925 results for Slide-rule.
Abstract:
This article presents a new series of monthly equity returns for the British stock market for the period 1825-1870. In addition to calculating capital appreciation and dividend yields, the article also estimates the effect of survivorship bias on returns. Three notable findings emerge from this study. First, stock market returns in the 1825-1870 period are broadly similar for Britain and the United States, although the British market is less risky. Second, real returns in the 1825-1870 period are higher than in subsequent epochs of British history. Third, unlike the modern era, dividends are the most important component of returns.
Abstract:
This article evaluates the anti-corruption campaign instituted in Nigeria following the post-authoritarian transition in the country, with a specific focus on political corruption. The anti-corruption campaign is being prosecuted within a context where law is as critical a factor as politics. This article examines whether the judiciary, in view of its accountability deficit, can offer legitimacy to the campaign. How have its questionable credentials affected its involvement in the campaign to sanitise public life? What has been the impact of the judicial role on the rule of law? These are some of the important questions this article seeks to answer. The inquiry in this article demonstrates how the guardian institution of the rule of law faces an uphill task in performing that role in a post-authoritarian context.
Abstract:
Seven cases were discussed by an expert panel at the 2009 Annual Scientific Meeting of the British Society of Haematology. These cases are presented in a similar format to that adopted for the meeting: an initial discussion of the presenting morphology and the generation of differential diagnoses; then, following display of further presenting and diagnostic information, each case was concluded with a final diagnosis.
Abstract:
In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between the pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
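For illustration only, the sketch below shows one way such an update could look in code: a reward-modulated Hebbian change that is applied to each synapse only with some probability (stochastic timing) and that also spreads, with a smaller gain, to the other synapses of the active pre- and postsynaptic neurons (heterosynaptic plasticity). The network shape, constants and functional form are assumptions for illustration, not the authors' rule.

```python
# Hypothetical sketch of a stochastic, reward-modulated Hebb-like update with
# heterosynaptic spread. All constants and the exact form are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_hebb_update(W, pre, post, reward,
                           eta=0.1, hetero_gain=0.2, p_update=0.5):
    """W[i, j]: synapse from presynaptic unit i to postsynaptic unit j.
    pre, post: activity vectors; reward: scalar reinforcement signal."""
    # Homosynaptic term: classic Hebbian product, gated by the reward signal.
    homo = eta * reward * np.outer(pre, post)
    # Heterosynaptic term (assumed form): every synapse sharing an active
    # pre- or postsynaptic neuron receives a weaker change of the same sign.
    hetero = hetero_gain * eta * reward * (
        np.outer(pre, np.ones_like(post)) + np.outer(np.ones_like(pre), post))
    # Stochastic timing: each synapse is modified only with probability p_update.
    mask = rng.random(W.shape) < p_update
    return W + (homo + hetero) * mask

# Example: 3 presynaptic and 2 postsynaptic units, one rewarded pattern.
W = rng.normal(0.0, 0.1, (3, 2))
W = stochastic_hebb_update(W, pre=np.array([1.0, 0.0, 1.0]),
                           post=np.array([0.0, 1.0]), reward=+1.0)
print(W)
```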
Abstract:
In this preliminary study, we investigate how inconsistency in a network intrusion detection rule set can be measured. To achieve this, we first examine the structure of these rules, which are based on Snort and incorporate regular expression (regex) pattern matching. We then identify primitive elements in these rules in order to translate the rules into their (equivalent) logical forms and to establish connections between them. Additional rules from background knowledge are also introduced to make the correlations among rules more explicit. We measure the degree of inconsistency in formulae of such a rule set (using the scoring function, Shapley inconsistency values and the blame measure for prioritized knowledge) and compare the informativeness of these measures. Finally, we propose a new measure of inconsistency for prioritized knowledge which incorporates the normalized number of atoms of a language involved in inconsistency, to provide a deeper inspection of inconsistent formulae. We conclude that such measures are useful for the network intrusion domain, provided that introducing expert knowledge for the correlation of rules is feasible.
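As a toy illustration of the translation step described above (not the paper's implementation), the sketch below maps heavily simplified Snort-like rules to sets of logical atoms and flags a pair of rules as mutually inconsistent when they match the same traffic pattern but prescribe contradictory actions; the rule shape and the conflict criterion are assumptions.

```python
# Toy illustration: simplified Snort-like rules translated to atoms, with a
# naive pairwise inconsistency check. Format and criterion are assumptions.
from itertools import combinations

# (action, protocol, dest_port, payload_regex) -- a heavily simplified rule shape.
rules = [
    ("alert", "tcp", 80, r"cmd\.exe"),
    ("pass",  "tcp", 80, r"cmd\.exe"),   # contradicts the rule above
    ("alert", "tcp", 22, r"root"),
]

def atoms(rule):
    """Translate a rule body (everything except the action) into a set of atoms."""
    _, proto, port, regex = rule
    return frozenset({f"proto={proto}", f"dport={port}", f"payload~/{regex}/"})

def inconsistent(r1, r2):
    """Same body, contradictory actions -> inconsistent pair."""
    return atoms(r1) == atoms(r2) and r1[0] != r2[0]

conflicts = [(r1, r2) for r1, r2 in combinations(rules, 2) if inconsistent(r1, r2)]
print(f"{len(conflicts)} inconsistent pair(s):", conflicts)
```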
Abstract:
Belief revision characterizes the process of revising an agent's beliefs when receiving new evidence. In the field of artificial intelligence, revision strategies have been extensively studied in the context of logic-based formalisms and probability kinematics. However, so far there is not much literature on this topic in evidence theory. In contrast, the combination rules proposed so far in the theory of evidence, especially Dempster's rule, are symmetric. They rely on a basic assumption, namely that the pieces of evidence being combined are on a par, i.e. play the same role. When one source of evidence is less reliable than another, it is possible to discount it and then still use a symmetric combination operation. In the case of revision, the idea is to let the prior knowledge of an agent be altered by some input information. The change problem is thus intrinsically asymmetric. Assuming the input information is reliable, it should be retained, whilst the prior information should be changed minimally to that effect. To deal with this issue, this paper defines the notion of revision for the theory of evidence in such a way as to bring together the probabilistic and logical views. Several previously proposed revision rules are reviewed, and we advocate one of them as better corresponding to the idea of revision. It is extended to cope with inconsistency between prior and input information. It reduces to Dempster's rule of combination, just as revision in the sense of Alchourrón, Gärdenfors, and Makinson (AGM) reduces to expansion, when the input is strongly consistent with the prior belief function. Properties of this revision rule are also investigated, and it is shown to generalize Jeffrey's rule of updating, Dempster's rule of conditioning and a form of AGM revision.
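Since the proposed revision rule reduces to Dempster's rule of combination when the input is strongly consistent with the prior belief function, the short sketch below works through Dempster's rule on a small frame of discernment; the frame and the two mass functions are made-up numbers for illustration only.

```python
# Dempster's rule of combination on a toy example (made-up mass functions).
from itertools import product

frame = frozenset({"a", "b", "c"})

# Mass functions: dicts mapping focal sets (frozensets) to masses summing to 1.
m_prior = {frozenset({"a", "b"}): 0.6, frame: 0.4}
m_input = {frozenset({"b"}): 0.7, frozenset({"b", "c"}): 0.3}

def dempster_combine(m1, m2):
    """Conjunctive combination followed by normalization that redistributes
    the mass assigned to the empty set (the conflict)."""
    combined = {}
    conflict = 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + mA * mB
        else:
            conflict += mA * mB
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

# Here the input is consistent with the prior, so the conflict mass is zero.
print(dempster_combine(m_prior, m_input))  # {b}: 0.88, {b, c}: 0.12
```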
Abstract:
Norms constitute a powerful coordination mechanism among heterogeneous agents. In this paper, we propose a rule language to specify and explicitly manage the normative positions of agents (permissions, prohibitions and obligations), with which distinct deontic notions and their relationships can be captured. Our rule-based formalism includes constraints for greater expressiveness and precision and allows us to supplement (and implement) electronic institutions with norms. We also show how some normative aspects are given a computational interpretation. © 2008 Springer Science+Business Media, LLC.
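Purely as an illustrative sketch (not the paper's rule language), the snippet below shows one simple way that permissions, prohibitions and obligations with constraints could be represented and checked against a set of performed actions; the data structures and the violation semantics are assumptions.

```python
# Hypothetical representation of normative positions with constraints.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    kind: str                           # "permission", "prohibition" or "obligation"
    agent: str
    action: str
    constraint: Callable[[dict], bool]  # condition over the current state

def violations(norms, state, performed):
    """Return the norms violated in `state`, given the actions performed."""
    out = []
    for n in norms:
        if not n.constraint(state):
            continue  # constraint not active, the norm does not apply
        did = (n.agent, n.action) in performed
        if n.kind == "prohibition" and did:
            out.append(n)
        elif n.kind == "obligation" and not did:
            out.append(n)
    return out

norms = [
    Norm("obligation", "buyer", "pay", lambda s: s["goods_delivered"]),
    Norm("prohibition", "seller", "resell", lambda s: True),
    Norm("permission", "buyer", "inspect", lambda s: True),
]
state = {"goods_delivered": True}
performed = {("seller", "resell")}
print([(n.kind, n.agent, n.action) for n in violations(norms, state, performed)])
```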