20 results for “Game of rules”
Abstract:
This article draws on qualitative research exploring the concept of public value in the delivery of sport services by the organization Sport England. The research took place against a backdrop of shifting priorities following the award of the 2012 Olympic Games to London. It highlights the difficulties that exist in measuring the qualitative nature of the public value of sport and suggests that the idea needs to be better understood. Research with organizations involved alongside Sport England in the delivery of sport is described. This explores the potential to create a public value vision, how to measure it, and how to focus public value on delivery beyond the aim of ‘sport for sport's sake’ and more towards ‘sport for the greater good’. The article argues that this represents a game of ‘two halves’, in which the first half focuses on 2012 and the second half is concerned with its legacy.
Abstract:
This paper explores the law of accidental mixtures of goods. It traces the development of the English rules on mixture from the seminal nineteenth-century case of Spence v Union Marine Insurance Co to the present day, and compares their responses to those given by Roman law, which has always been claimed as an influence on our jurisprudence in this area. It is argued that the different answers given by English and Roman law to essentially the same problems of title result from the differing bases of these legal systems. Roman a priori theory is contrasted with the more practical reasoning of the common law, and while both sets of rules are judged to be coherent on their own terms, it is suggested that the difference between them reflects a more general philosophical disagreement about the proper functioning of a legal system, and the relative importance of theoretical and pragmatic considerations.
Abstract:
Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to users according to their quality requirements is an increasingly pressing requirement, and routers therefore need the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their non-deterministic performance. Although content addressable memories (CAMs) are favoured by technology vendors due to their deterministic, high lookup rates, they suffer from the problems of high power dissipation and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multi-level cutting of the classification space into smaller subspaces. The proposed solution exploits the geometrical distribution of rules in the classification space, and offers the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
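The abstract does not give the algorithm's details, but the general idea of multi-level geometric cutting can be illustrated with a minimal sketch: recursively bisect the rule space along alternating fields until each leaf holds only a handful of rules, which in the proposed hybrid would be candidates for small CAM blocks. Everything below (two fields, the depth cap, the data layout) is an illustrative assumption in the spirit of HiCuts-style decomposition, not the paper's implementation.

```python
# Hypothetical sketch of multi-level cutting for packet classification.
# Rules match ranges on two fields; leaves hold small rule lists that,
# in the paper's hybrid design, would map onto small CAM blocks.

LEAF_SIZE = 2   # max rules per leaf before we stop cutting (assumed)

class Rule:
    def __init__(self, rule_id, src, dst):
        self.rule_id = rule_id   # priority = insertion order
        self.src = src           # (lo, hi) range on the source field
        self.dst = dst           # (lo, hi) range on the destination field

    def covers(self, region):
        """Does this rule overlap the region ((slo, shi), (dlo, dhi))?"""
        (slo, shi), (dlo, dhi) = region
        return self.src[0] <= shi and self.src[1] >= slo and \
               self.dst[0] <= dhi and self.dst[1] >= dlo

    def matches(self, pkt):
        s, d = pkt
        return self.src[0] <= s <= self.src[1] and self.dst[0] <= d <= self.dst[1]

def build(rules, region, depth=0):
    """Recursively halve the region along alternating fields."""
    live = [r for r in rules if r.covers(region)]
    if len(live) <= LEAF_SIZE or depth >= 8:
        return ('leaf', live)              # small list -> CAM candidate
    (slo, shi), (dlo, dhi) = region
    if depth % 2 == 0:                     # cut the source axis
        mid = (slo + shi) // 2
        left, right = ((slo, mid), (dlo, dhi)), ((mid + 1, shi), (dlo, dhi))
    else:                                  # cut the destination axis
        mid = (dlo + dhi) // 2
        left, right = ((slo, shi), (dlo, mid)), ((slo, shi), (mid + 1, dhi))
    return ('node', depth % 2, mid,
            build(live, left, depth + 1), build(live, right, depth + 1))

def classify(tree, pkt):
    """Walk the cut tree, then linearly scan the small leaf list."""
    while tree[0] == 'node':
        _, axis, mid, left, right = tree
        tree = left if pkt[axis] <= mid else right
    for r in tree[1]:                      # first (highest-priority) match wins
        if r.matches(pkt):
            return r.rule_id
    return None

rules = [Rule(0, (0, 120), (50, 60)),      # invented example rules
         Rule(1, (130, 255), (0, 40)),
         Rule(2, (0, 255), (0, 255))]      # default rule
tree = build(rules, ((0, 255), (0, 255)))
assert classify(tree, (100, 55)) == 0 and classify(tree, (200, 100)) == 2
```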
Abstract:
The results of three experiments investigating the role of deductive inference in Wason's selection task are reported. In Experiment 1, participants received either a standard one-rule problem or a task containing a second rule, which specified an alternative antecedent. Both groups of participants were asked to select those cards that they considered necessary to test whether the rule common to both problems was true or false. The results showed a significant suppression of q card selections in the two-rule condition. In addition, there was weak evidence for both decreased p selection and increased not-q selection. In Experiment 2 we again manipulated the number of rules and found suppression of q card selections only. Finally, in Experiment 3 we compared one- and two-rule conditions with a two-rule condition in which the second rule specified two alternative antecedents in the form of a disjunction. The q card selections were suppressed in both of the two-rule conditions, but there was no effect of whether the second rule contained one or two alternative antecedents. We argue that our results support the claim that people make inferences about the unseen side of the cards when engaging with the indicative selection task.
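For readers unfamiliar with the task, the normative analysis that this inference account builds on can be sketched in a few lines: a card is worth turning only if its hidden side could reveal a p-and-not-q counterexample to "if p then q". The snippet below is purely illustrative; the card labels are the conventional letter/number example, not the authors' materials.

```python
# Toy check of which visible cards can falsify "if p then q": a card
# matters only if some completion of its hidden side yields p and not-q.
from itertools import product

def can_falsify(visible_p, visible_q):
    """visible_p / visible_q: True, False, or None if that side is hidden."""
    hidden_p = [visible_p] if visible_p is not None else [True, False]
    hidden_q = [visible_q] if visible_q is not None else [True, False]
    return any(p and not q for p, q in product(hidden_p, hidden_q))

cards = {                       # one side visible, the other hidden (None)
    'p card (A)':     (True,  None),
    'not-p card (K)': (False, None),
    'q card (4)':     (None,  True),
    'not-q card (7)': (None,  False),
}
for name, (p, q) in cards.items():
    print(name, '-> must turn' if can_falsify(p, q) else '-> irrelevant')
# Only the p and not-q cards can reveal a counterexample.
```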
Abstract:
In this preliminary case study, we investigate how inconsistency in a network intrusion detection rule set can be measured. To achieve this, we first examine the structure of these rules which incorporate regular expression (Regex) pattern matching. We then identify primitive elements in these rules in order to translate the rules into their (equivalent) logical forms and to establish connections between them. Additional rules from background knowledge are also introduced to make the correlations among rules more explicit. Finally, we measure the degree of inconsistency in formulae of such a rule set (using the Scoring function, Shapley inconsistency values and Blame measure for prioritized knowledge) and compare the informativeness of these measures. We conclude that such measures are useful for the network intrusion domain assuming that incorporating domain knowledge for correlation of rules is feasible.
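As a toy illustration of the kind of measurement involved (not the paper's implementation), the sketch below models each rule as a conjunction of propositional literals, enumerates minimal inconsistent subsets by brute force, and computes the Shapley inconsistency value for the MI measure, which is known to reduce to a sum of 1/|M| over the minimal inconsistent subsets M containing a formula (Hunter & Konieczny). The rule names are invented, and real intrusion detection rules would first need the Regex-to-logic translation the paper describes.

```python
from itertools import combinations

# Invented knowledge base: each formula is a frozenset of literals read
# as a conjunction; 'x' and '-x' are complementary literals.
kb = [
    frozenset({'alert_tcp_syn'}),           # rule 1: flag SYN-scan traffic
    frozenset({'-alert_tcp_syn', 'log'}),   # rule 2: suppress that alert, log only
    frozenset({'-log'}),                    # rule 3: logging disabled for this flow
]

def is_inconsistent(formulas):
    lits = set().union(*formulas)
    return any(('-' + l) in lits for l in lits if not l.startswith('-'))

def minimal_inconsistent_subsets(kb):
    mis = []
    for size in range(1, len(kb) + 1):       # ascending size => minimality
        for combo in combinations(kb, size):
            if is_inconsistent(combo) and \
               not any(set(m) <= set(combo) for m in mis):
                mis.append(combo)
    return mis

def shapley_mi(kb):
    """Shapley inconsistency value w.r.t. the MI measure: for each formula,
    sum 1/|M| over the minimal inconsistent subsets M containing it."""
    mis = minimal_inconsistent_subsets(kb)
    return {f: sum(1 / len(m) for m in mis if f in m) for f in kb}

for f, v in shapley_mi(kb).items():
    print(sorted(f), round(v, 2))
# Rule 2 participates in both conflicts and receives the largest share.
```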
Abstract:
In Contingent Valuation studies, researchers often base their definition of the environmental good on scientific/expert consensus. However, respondents may not hold this same commodity definition prior to the transaction. This raises questions as to the potential for staging a satisfactory transaction, based on Fischhoff and Furby's (1988) criteria. Some unresolved issues regarding the provision of information to respondents to facilitate such a transaction are highlighted. In this paper, we apply content analysis to focus group discussions and develop a set of rules, which take account of the non-independence of group data, to explore whether the researcher's and respondents' prior definitions are in any way similar. We use the results to guide information provision in a subsequent questionnaire.
Abstract:
The ability to project oneself into the future to pre-experience an event is referred to as episodic future thinking (Atance & O'Neill, 2001). Only a relatively small number of studies have attempted to measure this ability in preschool-aged children (Atance & Meltzoff, 2005; Busby & Suddendorf, 2005a, 2005b, 2010; Russell, Alexis, & Clayton, 2010). Perhaps the most successful method is that used by Russell et al. (2010). In this task, 3- to 5-year-olds played a game of blow football at one end of a table. Afterwards, children were asked to select tools that would enable them to play the same game tomorrow from the opposite, unreachable, side of the table. Results indicated that only 5-year-olds were capable of selecting the right objects for future use more often than would be expected by chance. Because the probability of selecting the correct 2 objects from a choice of 6 by chance was low, above-chance performance was observed in this older group even though most children failed the task.

This study aimed to identify the age at which children begin to consistently pass this type of task. Three different tasks were designed in which children played a game on one side of a table and were then asked to choose a tool to play a similar game on the other side of the table the next day. For example, children used a toy fishing rod to catch magnetic fish on one side of the table; playing the same game from the other side of the table required a different type of fishing rod. At test, children chose between just 2 objects: the tool they had already used, which would not work on the other side, and a different tool that they had not used before but which was suitable for the other side of the table.

Experiment 1: Forty-eight 4-year-olds (M = 53.6 months, SD = 2.9) took part. These children were assigned to one of two conditions: a control condition (present-self) where the key test questions were asked in the present tense, and an experimental condition (future-self) where the questions were in the future tense. Surprisingly, the results showed that both groups of 4-year-olds selected the correct tool at above-chance levels (Table 1 shows the mean number of correct answers out of three). However, the children could see the apparatus when they answered the test questions and so perhaps answered them correctly without imagining the future.

Experiment 2: Twenty-four 4-year-olds (M = 53.7 months, SD = 3.1) participated. Preschoolers in this study experienced one condition: future-self looking-away. In this condition children were asked to turn their backs to the games when answering the test questions, which were in the future tense. Children again performed above chance levels on all three games.

Contrary to the findings of Russell et al. (2010), our results suggest that episodic future thinking skills could be present in 4-year-olds, assuming that this is what is measured by the tasks.

Table 1. Mean number of correct answers across the three games in Experiments 1 and 2 (N = 24 in each condition)

Experimental condition                   Mean correct   SD     Significance
Exp. 1 (present-self, look) – 2 items    2.75           0.68   p < 0.001
Exp. 1 (future-self, look) – 2 items     2.79           0.42   p < 0.001
Exp. 2 (future-self, away) – 2 items     2.33           0.64   p < 0.001
Exp. 3 (future-self, away) – 3 items     1.21           0.98   p = 0.157
Abstract:
In this paper we advocate the Loop-of-stencil-reduce pattern as a way to simplify the parallel programming of heterogeneous platforms (multicore + GPUs). Loop-of-stencil-reduce is general enough to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop. It transparently targets (by using OpenCL) combinations of CPU cores and GPUs, and it makes it possible to simplify the deployment of a single stencil computation kernel on different GPUs. The paper discusses the implementation of Loop-of-stencil-reduce within the FastFlow parallel framework, using a simple iterative data-parallel application (Game of Life) as a running example and a highly effective parallel filter for visual data restoration to assess performance. Thanks to the high-level design of the Loop-of-stencil-reduce pattern, it was possible to run the filter seamlessly on a multicore machine, on multiple GPUs, and on both.
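The pattern's structure, stripped of FastFlow and OpenCL, can be sketched sequentially: a stencil phase sweeps the grid, a reduce phase folds the result into a scalar, and that scalar drives the loop guard. The minimal sketch below uses Game of Life, the paper's running example; it shows only the shape of the pattern, not the parallel implementation, and the termination test (count of changed cells) is an assumed choice of reduction.

```python
def stencil(grid):
    """One Game of Life step: 8-neighbour stencil with wrap-around borders."""
    n, m = len(grid), len(grid[0])
    def next_state(i, j):
        k = sum(grid[(i + di) % n][(j + dj) % m]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))
        return 1 if k == 3 or (grid[i][j] == 1 and k == 2) else 0
    return [[next_state(i, j) for j in range(m)] for i in range(n)]

def loop_of_stencil_reduce(grid, max_iters=100):
    """The pattern: repeat a stencil phase, then a reduce phase whose
    scalar result (here, the number of changed cells) drives the loop."""
    for it in range(max_iters):
        new = stencil(grid)                       # stencil phase
        changed = sum(a != b                      # reduce phase
                      for ra, rb in zip(grid, new)
                      for a, b in zip(ra, rb))
        grid = new
        if changed == 0:                          # fixed point: stop looping
            return grid, it + 1
    return grid, max_iters

# A 2x2 block (a still life) converges immediately.
grid = [[0] * 4 for _ in range(4)]
grid[1][1] = grid[1][2] = grid[2][1] = grid[2][2] = 1
final, iters = loop_of_stencil_reduce(grid)
print(iters)   # 1: the first reduction already reports no change
```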
Abstract:
We discuss the limitations and rights which may affect the researcher's access to and use of digital, court- and administrative-tribunal-based information. We suggest that there is a need for a Europe-wide investigation of the legal framework which affects the researcher who might wish to utilise this form of information. A Europe-wide context is required because much of the relevant law is European rather than national, but many of the constraints are cultural. It is our thesis that research improves understanding and then improves practice as that understanding becomes part of public debate. If it is difficult to undertake research, then public debate about the court system – its effectiveness, its biases, its strengths – becomes constrained. Access to court records is currently determined on a discretionary basis or on the basis of interpretation of rules of the court where these are challenged in legal proceedings. Anecdotal evidence suggests that there are significant variations in the extent to which court documents such as pleadings, transcripts, affidavits, etc. are made generally accessible under court rules or as a result of litigation in different jurisdictions or, indeed, in different courts in the same jurisdiction. Such a lack of clarity can only chill what might otherwise be valuable research. Courts are not, of course, democratic bodies. However, they are part of a democratic system and should, we suggest – both for the public benefit and for their proper operation – be accessible and criticisable by the independent researcher. The extent to which the independent researcher is enabled access is the subject of this article. The rights of access for researchers and the public have been examined in other common law countries but not, to date, in the UK or Europe.
Abstract:
We investigate how a group of players might cooperate with each other within the setting of a non-cooperative game. We pursue two notions of partial cooperative equilibria that follow a modification of Nash's best response rationality rather than a core-like approach. Partial cooperative Nash equilibrium treats non-cooperative players and the coalition of cooperators symmetrically, while the notion of partial cooperative leadership equilibrium assumes that the group of cooperators has a first-mover advantage. We prove existence theorems for both types of equilibria. We look at three well-known applications under partial cooperation. In a game of voluntary provision of a public good we show that our two new equilibrium notions of partial cooperation coincide. In a modified Cournot oligopoly, we identify multiple equilibria of each type and show that a non-cooperator may have a higher payoff than a cooperator. In contrast, under partial cooperation in a symmetric Salop City game, a cooperator enjoys a higher return.
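A toy numeric sketch (with an assumed linear payoff, not necessarily the paper's specification) shows why the two equilibrium notions can coincide in the public-good application: when non-cooperators have a dominant strategy, the coalition gains nothing from moving first.

```python
# Assumed linear public-good game: payoff_i = w - g_i + b * G, with
# contributions g_i in [0, w] and G the total contribution. Parameters
# are chosen so that b < 1 (lone free-riding is dominant) while k*b > 1
# (joint contribution pays for a coalition of size k). Because the
# non-cooperators' best response never depends on the coalition's move,
# the Nash-style and leadership-style partial cooperative outcomes agree.

w, b, n, k = 10.0, 0.4, 5, 3   # endowment, benefit rate, players, coalition size

def best_response_noncooperator():
    # d(payoff_i)/d(g_i) = b - 1 < 0  ->  contribute nothing
    return 0.0 if b < 1 else w

def best_response_coalition():
    # The coalition maximises its members' summed payoff:
    # d/d(g_i) of (k*w - sum_g + k*b*G) = k*b - 1  ->  contribute all if > 0
    return w if k * b > 1 else 0.0

g_coop = best_response_coalition()        # each cooperator's contribution
g_free = best_response_noncooperator()    # each non-cooperator's contribution
G = k * g_coop + (n - k) * g_free
payoff_coop = w - g_coop + b * G
payoff_free = w - g_free + b * G
print(f"cooperator payoff {payoff_coop:.1f}, free-rider payoff {payoff_free:.1f}")
# With these numbers: G = 30, a cooperator gets 12.0 and a free-rider 22.0,
# echoing the abstract's point that a non-cooperator may out-earn a cooperator.
```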
Abstract:
In this preliminary study, we investigate how inconsistency in a network intrusion detection rule set can be measured. To achieve this, we first examine the structure of these rules which are based on Snort and incorporate regular expression (Regex) pattern matching. We then identify primitive elements in these rules in order to translate the rules into their (equivalent) logical forms and to establish connections between them. Additional rules from background knowledge are also introduced to make the correlations among rules more explicit. We measure the degree of inconsistency in formulae of such a rule set (using the Scoring function, Shapley inconsistency values and Blame measure for prioritized knowledge) and compare the informativeness of these measures. Finally, we propose a new measure of inconsistency for prioritized knowledge which incorporates the normalized number of atoms in a language involved in inconsistency to provide a deeper inspection of inconsistent formulae. We conclude that such measures are useful for the network intrusion domain assuming that introducing expert knowledge for correlation of rules is feasible.
Abstract:
Fuzzy-neural-network-based inference systems are well-known universal approximators which can produce linguistically interpretable results. Unfortunately, their dimensionality can be extremely high due to an excessive number of inputs and rules, which raises the need for overall structure optimization. In the literature, various input selection methods are available, but they are applied separately from rule selection, often without considering the fuzzy structure. This paper proposes an integrated framework to optimize the number of inputs and the number of rules simultaneously. First, a method is developed to select the most significant rules, along with a refinement stage to remove unnecessary correlations. An improved information criterion is then proposed to find an appropriate number of inputs and rules to include in the model, leading to a balanced tradeoff between interpretability and accuracy. Simulation results confirm the efficacy of the proposed method.
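As a hedged illustration of the accuracy-versus-size tradeoff described here (not the paper's algorithm), the sketch below greedily ranks candidate fuzzy rules by error reduction and then picks the rule count minimising an AIC-style criterion; the membership functions, the criterion form, and the data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                      # two candidate inputs
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)  # only input 0 matters

def firing_strengths(X, centres, width=0.5):
    """Gaussian membership per rule (Takagi-Sugeno-style, assumed form)."""
    d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d / (2 * width ** 2))

centres = rng.uniform(-1, 1, (12, 2))                 # 12 candidate rules
Phi = firing_strengths(X, centres)
Phi = Phi / Phi.sum(axis=1, keepdims=True)            # normalised strengths

def fit_error(cols):
    """Least-squares consequents for the chosen rules; mean squared error."""
    theta, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
    return ((Phi[:, cols] @ theta - y) ** 2).mean()

# Greedy forward selection of rules by error reduction
remaining, chosen, history = list(range(12)), [], []
while remaining:
    best = min(remaining, key=lambda c: fit_error(chosen + [c]))
    chosen.append(best)
    remaining.remove(best)
    mse = fit_error(chosen)
    aic = len(X) * np.log(mse) + 2 * len(chosen)      # AIC-style tradeoff
    history.append((len(chosen), mse, aic))

n_rules = min(history, key=lambda h: h[2])[0]         # size minimising criterion
print(f"selected {n_rules} of 12 candidate rules")
```

The same criterion can be evaluated over candidate input subsets as well, which is the joint input-and-rule search the abstract's integrated framework refers to.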