801 results for Game rule encoding
Abstract:
In 'Avalanche', an object is lowered, players staying in contact throughout. Normally the task is easily accomplished. However, with larger groups, counter-intuitive behaviours appear. The paper proposes a formal theory for the underlying causal mechanisms. The aim is not only to provide an explicit, testable hypothesis for the source of the observed modes of behaviour, but also to exemplify the contribution that formal theory building can make to understanding complex social phenomena. Mapping reveals the importance of geometry to the Avalanche game; each player has a pair of balancing loops, one involved in lowering the object, the other ensuring contact. With more players, sets of balancing loops interact, and these can allow dominance by reinforcing loops, causing the system to chase upwards towards an ever-increasing goal. However, a series of other effects concerning human physiology and behaviour (HPB) is posited as playing a role. The hypothesis is therefore rigorously tested using simulation. For simplicity a 'One Degree of Freedom' case is examined, allowing all of the effects to be included whilst rendering the analysis more transparent. Formulation and experimentation with the model gives insight into the behaviours. Multi-dimensional rate/level analysis indicates that there is only a narrow region in which the system is able to move downwards. Model runs reproduce the single 'desired' mode of behaviour and all three of the observed 'problematic' ones. Sensitivity analysis gives further insight into the system's modes and their causes. The behaviour is seen to arise only when the geometric effects apply (number of players greater than degrees of freedom of the object) in combination with a range of HPB effects. An analogy exists between the co-operative behaviour required here and various examples: conflicting strategic objectives in organizations, the Prisoners' Dilemma, and integrative bargaining situations. Additionally, the game may be relatable in more direct algebraic terms to situations involving companies in which the resulting behaviours are mediated by market regulations. Finally, comment is offered on the inadequacy of some forms of theory building, and the case is made for formal theory building involving the use of models, analysis and plausible explanations to create deep understanding of social phenomena.
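To make the posited loop structure concrete, the following is a minimal, purely illustrative Python sketch of a one-degree-of-freedom version of the game. The update rule, parameter names and values are invented for illustration and are not taken from the paper's model: each simulated player combines the lowering loop (a constant downward drift) with the contact loop (closing the gap to the object), and the object rests on the highest finger.

    import random

    def simulate(n_players=8, steps=200, lower_rate=0.05,
                 contact_gain=0.5, noise=0.01):
        """Toy one-degree-of-freedom 'Avalanche' model (illustrative only).

        Each player balances two pressures per step: lowering their finger
        (the lowering loop) and closing any gap to the object so as not to
        lose contact (the contact loop). Because the object rests on the
        highest finger, one player's upward correction can drag the whole
        group upward (the reinforcing-loop dominance posited above).
        """
        fingers = [1.0] * n_players        # initial finger heights
        history = []
        for _ in range(steps):
            obj = max(fingers)             # object sits on the highest finger
            updated = []
            for h in fingers:
                gap = obj - h              # distance up to the object
                # contact loop closes the gap; lowering loop drifts down;
                # noise stands in loosely for the HPB effects (tremor etc.)
                h = h + contact_gain * gap - lower_rate + random.gauss(0, noise)
                updated.append(max(h, 0.0))
            fingers = updated
            history.append(max(fingers))
        return history

    if __name__ == "__main__":
        trace = simulate()
        print(f"object height: start 1.00 -> end {trace[-1]:.2f}")

With the default gain below 1 the corrections damp out and the object descends, matching the 'desired' mode; raising contact_gain well above 1 makes each correction overshoot, which can let the reinforcing loop dominate and send the simulated object chasing upwards.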
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
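For readers unfamiliar with the algorithm family, the following is a compact, illustrative Python sketch of the basic serial Prism covering loop in the style of Cendrowska's original algorithm: induce one rule at a time for a target class by greedily adding the attribute-value test with the highest conditional probability of that class, then separate off the covered instances and conquer the remainder. It is a simplification for exposition, not the paper's implementation, and it omits the refinements that distinguish PrismTCS.

    def prism(instances, target_class):
        """Illustrative sketch of a Prism-style covering algorithm for one
        class. `instances` is a list of (attribute_dict, label) pairs; the
        result is a list of rules, each a dict of attribute -> value tests.
        """
        remaining = list(instances)
        rules = []
        while any(label == target_class for _, label in remaining):
            covered = remaining
            rule = {}
            # specialise until the rule covers only the target class
            while any(label != target_class for _, label in covered):
                candidates = {a for feats, _ in covered for a in feats} - rule.keys()
                if not candidates:
                    break                  # no attributes left to add
                best, best_prob = None, -1.0
                for attr in candidates:
                    for value in {f[attr] for f, _ in covered if attr in f}:
                        subset = [(f, l) for f, l in covered if f.get(attr) == value]
                        prob = sum(l == target_class for _, l in subset) / len(subset)
                        if prob > best_prob:
                            best, best_prob = (attr, value), prob
                rule[best[0]] = best[1]
                covered = [(f, l) for f, l in covered if f.get(best[0]) == best[1]]
            rules.append(rule)
            # separate: drop everything this rule covers, then conquer the rest
            remaining = [(f, l) for f, l in remaining
                         if not all(f.get(a) == v for a, v in rule.items())]
        return rules

    if __name__ == "__main__":
        data = [({"outlook": "sunny", "windy": "no"}, "play"),
                ({"outlook": "sunny", "windy": "yes"}, "stay"),
                ({"outlook": "rain", "windy": "no"}, "stay")]
        print(prism(data, "play"))   # e.g. [{'outlook': 'sunny', 'windy': 'no'}]

The repeated candidate scans over ever-shrinking subsets are the kind of cost that motivates the scalability work described above.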
Abstract:
In a world where data is captured on a large scale, the major challenge for data mining algorithms is to be able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide and conquer approach, also known as the top down induction of decision trees; the other is the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach. However, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harvest additional computer resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
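The abstract does not spell out the framework's internals, but the general opportunity in parallelising covering algorithms is that their dominant cost (counting attribute-value/class frequencies to choose the next rule term) decomposes cleanly over data partitions. The sketch below illustrates that generic data-parallel pattern with Python's multiprocessing; the function names and the round-robin partitioning are invented for illustration and are not taken from the framework described in the paper.

    from collections import Counter
    from multiprocessing import Pool

    def local_counts(chunk):
        """Count (attribute, value, label) triples in one data partition."""
        counts = Counter()
        for features, label in chunk:
            for attr, value in features.items():
                counts[(attr, value, label)] += 1
        return counts

    def parallel_counts(instances, n_workers=4):
        """Data-parallel version of the frequency-counting step that
        dominates covering algorithms such as Prism: each worker scans only
        its partition and the partial counts are merged on the coordinator.
        """
        chunks = [instances[i::n_workers] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            partials = pool.map(local_counts, chunks)
        total = Counter()
        for part in partials:
            total.update(part)
        return total

From such merged counts a coordinator can compute the same conditional probabilities as the serial algorithm while each worker touches only a fraction of the data, which is what allows rule induction to harvest the resources of a network of machines.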
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset, in the form of classification rules, to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than the rules induced by TDIDT. However, along with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed, such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT, in order to find analogous approaches that could reduce those of Prism.
Abstract:
The fast increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.
Abstract:
Advances in hardware and software in the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example if the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drifts. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
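The abstract describes eRules at the level of its evaluate/add/remove cycle, so the following minimal Python class only sketches that cycle under stated assumptions: a rule is a dict of attribute-value tests paired with a predicted label, rules whose windowed accuracy decays below a threshold are discarded, and instances no rule covers are buffered until a batch rule inducer can learn fresh rules from them. The class name, thresholds and rule representation are invented and are not taken from the eRules paper.

    class EvolvingRuleSet:
        """Sketch of a rule-based adaptive stream classifier built around an
        evolving rule set (details invented for illustration)."""

        def __init__(self, learn_rules, min_accuracy=0.7, buffer_size=100):
            self.learn_rules = learn_rules   # batch inducer: instances -> [(tests, label)]
            self.min_accuracy = min_accuracy
            self.buffer_size = buffer_size
            self.buffer = []                 # instances no current rule covers
            self.rules = []                  # entries: [tests, label, hits, misses]

        def predict(self, features):
            for tests, label, _, _ in self.rules:
                if all(features.get(a) == v for a, v in tests.items()):
                    return label
            return None                      # abstain rather than guess

        def learn_one(self, features, label):
            covered = False
            for rule in self.rules:
                if all(features.get(a) == v for a, v in rule[0].items()):
                    covered = True
                    rule[2 if rule[1] == label else 3] += 1
            # remove rules whose accuracy has decayed (concept drift)
            self.rules = [r for r in self.rules if r[2] + r[3] < 10
                          or r[2] / (r[2] + r[3]) >= self.min_accuracy]
            if not covered:
                self.buffer.append((features, label))
            # add new rules induced from the uncovered buffer once it fills
            if len(self.buffer) >= self.buffer_size:
                for tests, lab in self.learn_rules(self.buffer):
                    self.rules.append([tests, lab, 0, 0])
                self.buffer.clear()

Abstaining in predict when no rule fires mirrors the behaviour highlighted above: the classifier prefers leaving an instance unclassified to forcing a prediction that could be wrong.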
Abstract:
Polygalacturonase-inhibiting proteins (PGIPs) are extracellular plant inhibitors of fungal endopolygalacturonases (PGs) that belong to the superfamily of Leu-rich repeat proteins. We have characterized the full complement of pgip genes in the bean (Phaseolus vulgaris) genotype BAT93. This comprises four clustered members that span a 50-kb region and, based on their similarity, form two pairs (Pvpgip1/Pvpgip2 and Pvpgip3/Pvpgip4). Characterization of the encoded products revealed both partial redundancy and subfunctionalization against fungal-derived PGs. Notably, the pair PvPGIP3/PvPGIP4 also inhibited PGs of two mirid bugs (Lygus rugulipennis and Adelphocoris lineolatus). Characterization of Pvpgip genes of Pinto bean showed variations limited to single synonymous substitutions or small deletions. A three-amino acid deletion encompassing a residue previously identified as crucial for recognition of PG of Fusarium moniliforme was responsible for the inability of BAT93 PvPGIP2 to inhibit this enzyme. Consistent with the large variations observed in the promoter sequences, reverse transcription-PCR expression analysis revealed that the different family members differentially respond to elicitors, wounding, and salicylic acid. We conclude that both biochemical and regulatory redundancy and subfunctionalization of pgip genes are important for the adaptation of plants to pathogenic fungi and phytophagous insects.
Shaming men, performing power: female authority in Zimbabwe and Tanzania on the eve of colonial rule
Abstract:
In this paper we consider transcripts which originated from a practical series of Turing's Imitation Game held on 23rd June 2012 at Bletchley Park, England. In some cases the tests involved a 3-participant simultaneous comparison of two hidden entities, whereas others were the result of a direct 2-participant interaction. Each of the transcripts considered here resulted in a human interrogator being fooled, by a machine, into concluding that they had been conversing with a human. Particular features of the conversation are highlighted, successful ploys on the part of each machine are discussed, and likely reasons for the interrogator being fooled are considered. Subsequent feedback from the interrogators involved is also included.
Abstract:
This paper studies the exclusion of potential competition as a motivating factor for international mergers. We propose a simple game-theoretic framework in order to discuss the conditions under which mergers that prevent reciprocal domestic competition will occur. Our analysis highlights the shortcomings of antitrust policies based on pre-merger/post-merger concentration comparisons. A review of several recent European cases suggests that actual merger policy often fails to consider potential competition.
Abstract:
We report experimental results on a prisoners' dilemma implemented in a way which allows us to elicit incentive-compatible valuations of the game. We test the hypothesis that players' valuations coincide with their Nash equilibrium earnings. Our results offer significantly less support for this hypothesis than for the prediction of Dominant Strategy (DS) play.
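As a reminder of the benchmark being tested, here is a tiny worked example in Python with invented payoff numbers, showing why mutual defection is both the dominant-strategy outcome and the unique Nash equilibrium of a one-shot prisoners' dilemma; the paper's question is whether players' elicited valuations track the resulting equilibrium earnings.

    # Row player's payoffs (numbers invented; any PD ordering T > R > P > S works):
    # mutual cooperation 3, mutual defection 1, temptation 5, sucker 0.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def best_response(opponent):
        """Return the row player's payoff-maximising reply to `opponent`."""
        return max("CD", key=lambda me: PAYOFF[(me, opponent)])

    # Defection is the best response to both actions, i.e. a dominant
    # strategy, so (D, D) is the unique Nash equilibrium and each player's
    # equilibrium earnings equal PAYOFF[("D", "D")] = 1.
    assert best_response("C") == "D" and best_response("D") == "D"

The experiment elicits each player's valuation of the game and compares it with that equilibrium figure; the abstract reports less support for the valuation hypothesis than for dominant-strategy play.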
Abstract:
Kasparov-World, initiated by Microsoft and also sponsored by First USA, was a novel correspondence game played on the World Wide Web at one ply per day. This was the first time that any group had attempted to form on the Web and then solve shared problems against fixed, short-term deadlines. The first author first became involved in his role as a Web consultant, observing the dynamics and effectiveness of the group. These are fully described, together with observations on the technological contribution and the second author's post-hoc computation of some relevant Endgame Tables.