858 results for weighted majority game


Relevance:

100.00%

Publisher:

Abstract:

A new bargaining set based on notions of both internal and external stability is developed in the context of endogenous coalition formation. It makes an explicit distinction between within-group and outside-group deviation options, a distinction not present in existing bargaining sets. For the class of monotonic proper simple games, the outcomes in the bargaining set are characterized. Furthermore, it is shown that the bargaining set of any homogeneous weighted majority game contains an outcome whose underlying coalition structure consists of a minimal winning coalition and its complement.
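The structural result can be made concrete with a small sketch: for a weighted majority game given by weights and a quota (the example game below is mine, not the paper's), the minimal winning coalitions can be enumerated by brute force, and each one paired with its complement yields a coalition structure of the kind the abstract describes.

```python
from itertools import combinations

def minimal_winning_coalitions(weights, quota):
    """Enumerate minimal winning coalitions of a weighted majority game.

    A coalition is winning when its total weight meets the quota, and
    minimal when removing any single member makes it lose.
    """
    players = range(len(weights))
    minimal = []
    for r in range(1, len(weights) + 1):
        for coalition in combinations(players, r):
            total = sum(weights[i] for i in coalition)
            if total >= quota and all(total - weights[i] < quota
                                      for i in coalition):
                minimal.append(coalition)
    return minimal

# Example: the homogeneous game [3; 2, 1, 1, 1].
print(minimal_winning_coalitions([2, 1, 1, 1], 3))
```

Each minimal winning coalition, together with its complement, gives one of the two-block coalition structures referred to in the final sentence of the abstract.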

Relevance:

100.00%

Publisher:

Abstract:

We show that every additively representable comparative probability order on n atoms is determined by at least n - 1 binary subset comparisons. We show that there are many orders of this kind, not just the lexicographic order. These results provide answers to two questions of Fishburn et al. (2002). We also study the flip relation on the class of all comparative probability orders introduced by Maclagan. We generalise an important theorem of Fishburn, Pekeč and Reeds by showing that in any minimal set of comparisons that determine a comparative probability order, all comparisons are flippable. By calculating the characteristics of the flip relation for n = 6 we discover that the regions in the corresponding hyperplane arrangement can have no more than 13 faces and that there are 20 regions with 13 faces. All the neighbours of the 20 comparative probability orders which correspond to those regions are representable. Finally, we define a class of simple games with complete desirability relation whose strong desirability relation is acyclic, and show that the flip relation carries all the information about these games. We show that for n = 6 these games are weighted majority games.
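As a small illustration of additive representability (the weights and code are mine, not from the paper), an assignment of positive weights to the atoms induces a comparative probability order on all subsets by total weight; weights that are powers of two recover the lexicographic order mentioned in the abstract.

```python
from itertools import chain, combinations

def additive_order(weights):
    """Rank all subsets of the atoms by total weight, i.e. the
    comparative probability order induced by an additive
    representation. A tie in the sums would mean the weights do not
    define a strict order."""
    atoms = range(len(weights))
    subsets = chain.from_iterable(combinations(atoms, r)
                                  for r in range(len(weights) + 1))
    return sorted(subsets, key=lambda s: sum(weights[i] for i in s))

# Weights 1, 2, 4 (powers of two) induce the lexicographic order.
order = additive_order([1, 2, 4])
print(order)
```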

Relevance:

80.00%

Publisher:

Abstract:

This thesis is comprised of three chapters, each of which is concerned with properties of allocational mechanisms which include voting procedures as part of their operation. The theme of interaction between economic and political forces recurs in the three chapters, as described below.

Chapter One demonstrates the existence of a non-controlling-interest shareholders' equilibrium for a stylized one-period stock market economy with fewer securities than states of the world. The economy has two decision mechanisms: owners vote to change firms' production plans across states, holding shareholdings fixed; and individuals trade shares and the current production/consumption good, holding production plans fixed. A shareholders' equilibrium is a production plan profile together with a shares/current-good allocation that is stable under both mechanisms. In equilibrium, no (Kramer direction-restricted) plan revision is supported by a share-weighted majority, and there exists no Pareto-superior reallocation.

Chapter Two addresses efficient management of stationary-site, fixed-budget, partisan voter registration drives. Sufficient conditions are obtained for unique optimal registrar deployment within contested districts. Each census tract is assigned an index of expected net plurality return to registration investment, computed from estimates of registration, partisanship, and turnout. Optimal registration intensity is a logarithmic transformation of a tract's index. These conditions are tested using a merged data set that includes both census variables and Los Angeles County Registrar data from several 1984 Assembly registration drives. Marginal registration spending benefits, registrar compensation, and the general campaign problem are also discussed.

The last chapter considers social decision procedures at a higher level of abstraction. Chapter Three analyzes the structure of decisive coalition families, given a quasitransitive-valued social decision procedure satisfying the universal domain and IIA axioms. By identifying the alternatives X* ⊆ X on which the Pareto principle fails, imposition in the social ranking is characterized. Every coalition is weakly decisive for X* over X \ X*, and weakly antidecisive for X \ X* over X*; therefore, alternatives in X \ X* are never socially ranked above X*. Repeated filtering of the alternatives causing Pareto failure shows that states in X^(n)* \ X^((n+1))* are never socially ranked above X^((n+1))*. Limiting results of iterated application of the *-operator are also discussed.

Relevance:

80.00%

Publisher:

Abstract:

A trust-degree-based method for self-organizing secure interoperation is proposed, in which trust degrees describe the probability that autonomous domains and users participate correctly in collaboration. A domain's trust in a user is determined jointly by their direct interaction experience and by other domains' evaluations of the user, and users who satisfy the trust-policy requirements are permitted to activate roles. A user's history of malicious behaviour lowers that user's trust degree and thereby restricts the range of roles that can be activated. A domain's trust in other domains is updated by feedback, following the weighted majority algorithm, according to the deviation between those domains' evaluations of users and the domain's own direct experience, so malicious evaluations by a domain reduce the credibility of its recommendations. Experimental results show that the method effectively resists deception and malicious behaviour.
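A minimal sketch of the weighted-majority-style feedback idea described above (the function, parameter names, and numbers are all hypothetical; the paper's exact update rule is not given in the abstract):

```python
def update_recommender_trust(trust, deviations, beta=0.5, threshold=0.3):
    """Weighted-majority-style feedback update (illustrative only).

    Each recommending domain's trust weight is multiplied by beta
    whenever its evaluation of a user deviates from the querying
    domain's direct experience by more than the threshold, so
    habitually inaccurate (or malicious) recommenders lose influence.
    """
    return {
        domain: trust[domain] * (beta if abs(dev) > threshold else 1.0)
        for domain, dev in deviations.items()
    }

trust = {"domainA": 1.0, "domainB": 1.0}
# domainB's evaluation deviated strongly from direct experience.
trust = update_recommender_trust(trust, {"domainA": 0.1, "domainB": 0.6})
print(trust)
```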

Relevance:

80.00%

Publisher:

Abstract:

This thesis studies survival analysis techniques dealing with censoring to produce predictive tools that predict the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring indicates that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. Therefore, this thesis presents a new solution to high censoring by modifying an approach that was incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated with censoring. Most survival FS methods depend on Cox's model; however, machine learning classifiers (MLC) are preferred. Few methods have adopted MLC to perform survival FS, and they cannot be used with high censoring. This thesis proposes two FS methods which use MLC to evaluate features. The two FS methods use the new solution to deal with censoring. They combine factor analysis with a greedy stepwise FS search which allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second approach combines support vector machines, neural networks, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers. It presents a new hybrid FS process by using the MCS as a wrapper method and merging it with the iterated feature ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance. This shows that the proposed techniques are more powerful in correctly predicting the risk of re-intervention. Consequently, they enable doctors to set an appropriate future observation plan for each patient.
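The simple and weighted majority voting used to build the multiple classifier system can be sketched as follows (an illustrative implementation, not the thesis code; the base-learner predictions and weights below are made up):

```python
import numpy as np

def weighted_majority_vote(predictions, weights=None):
    """Combine binary class predictions from several classifiers.

    predictions: array of shape (n_classifiers, n_samples) of 0/1 labels.
    weights: per-classifier weights (e.g. validation accuracies);
    if None, this reduces to simple majority voting.
    """
    predictions = np.asarray(predictions)
    if weights is None:
        weights = np.ones(predictions.shape[0])
    weights = np.asarray(weights, dtype=float)
    # Weighted vote for class 1 on each sample, against half the total weight.
    score = weights @ predictions
    return (score > weights.sum() / 2).astype(int)

# Three hypothetical base learners (e.g. SVM, ANN, KNN) on four samples.
preds = [[1, 0, 1, 1],
         [0, 0, 1, 0],
         [1, 1, 0, 1]]
print(weighted_majority_vote(preds))                   # simple majority
print(weighted_majority_vote(preds, [0.9, 0.6, 0.6]))  # accuracy-weighted
```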

Relevance:

80.00%

Publisher:

Abstract:

Algorithms for concept drift handling are important for various applications, including video analysis and smart grids. In this paper we present a decision tree ensemble classification method for concept drift based on the Random Forest algorithm. The weighted majority voting ensemble aggregation rule is employed, following the ideas of the Accuracy Weighted Ensemble (AWE) method. In our case the base learner weight is computed for each evaluated sample, using the base learner's accuracy and the intrinsic proximity measure of Random Forest. Our algorithm exploits both temporal weighting of samples and ensemble pruning as a forgetting strategy. We present results of an empirical comparison of our method with the original Random Forest with incorporated replace-the-loser forgetting and other state-of-the-art concept-drift classifiers such as AWE2.
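A per-sample weighting of this kind can be sketched as follows (illustrative only; the paper's exact formula combining accuracy and proximity is not given in the abstract, so the product used here is an assumption):

```python
import numpy as np

def sample_weighted_vote(votes, accuracies, proximities):
    """Per-sample weighted majority vote over an ensemble of trees.

    Each tree's vote is weighted by the product of its recent accuracy
    and its Random Forest proximity to the evaluated sample, so trees
    that were accurate on similar samples count more (an assumed
    combination, not the paper's exact rule).
    """
    weights = np.asarray(accuracies) * np.asarray(proximities)
    votes = np.asarray(votes)
    scores = {c: weights[votes == c].sum() for c in np.unique(votes)}
    return max(scores, key=scores.get)

# Three trees vote on one sample with hypothetical accuracies/proximities.
print(sample_weighted_vote([1, 0, 1], [0.9, 0.8, 0.5], [1.0, 0.2, 0.5]))
```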

Relevance:

40.00%

Publisher:

Abstract:

In this work, an improvement of the results of Abellanas et al. (Weak Equilibrium in a Spatial Model, International Journal of Game Theory, 40(3), 449-459) is discussed. Concretely, this paper investigates an abstract game of competition between two players who want to earn the maximum number of points from a finite set of points in the plane. It is assumed that the distribution of these points is not uniform, so an appropriate weight is assigned to each position. A definition of equilibrium weaker than the classical one is introduced in order to avoid the uniqueness of the equilibrium position typical of the Nash equilibrium in these kinds of games. The existence of this approximated equilibrium in the game is analyzed by means of computational geometry techniques.
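The payoff computation underlying such a game can be sketched as follows (an illustrative convention, not the paper's model: each player places one facility, each weighted point goes to the nearer one, and ties are split evenly):

```python
import math

def captured_weight(p1, p2, points):
    """Weighted points captured by each of two players in the plane.

    points: list of ((x, y), weight). Every point is won by the
    nearer of the two player positions; ties are split evenly.
    """
    score1 = score2 = 0.0
    for (x, y), w in points:
        d1 = math.dist(p1, (x, y))
        d2 = math.dist(p2, (x, y))
        if d1 < d2:
            score1 += w
        elif d2 < d1:
            score2 += w
        else:
            score1 += w / 2
            score2 += w / 2
    return score1, score2

# Three weighted points; the two players sit at (1, 0) and (3, 0).
pts = [((0, 0), 3), ((4, 0), 1), ((2, 1), 2)]
print(captured_weight((1, 0), (3, 0), pts))
```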

Relevance:

30.00%

Publisher:

Abstract:

Authorised users (insiders) are behind the majority of security incidents with high financial impacts. Because authorisation is the process of controlling users' access to resources, improving authorisation techniques may mitigate the insider threat. Current approaches to authorisation suffer from the assumption that users will not (or cannot) depart from the expected behaviour implicit in the authorisation policy. In reality, however, users can and do depart from the canonical behaviour. This paper argues that the conflict of interest between insiders and authorisation mechanisms is analogous to the subset of problems formally studied in the field of game theory. It proposes a game-theoretic authorisation model that ensures users' potential misuse of a resource is explicitly considered when making an authorisation decision. The resulting authorisation model is dynamic in the sense that its access decisions vary according to changes in the explicit factors that influence the cost of misuse for both the authorisation mechanism and the insider.
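The flavour of such a dynamic decision rule can be sketched with a toy expected-utility comparison (entirely illustrative; the paper's actual model and parameters are not given in the abstract):

```python
def authorise(benefit, misuse_cost, p_misuse):
    """Toy expected-utility access decision (illustrative sketch).

    Grant access when the expected benefit of legitimate use outweighs
    the expected cost of misuse; p_misuse is an estimate that may
    change as the user's observed behaviour changes, which is what
    makes the decision dynamic.
    """
    return (1 - p_misuse) * benefit > p_misuse * misuse_cost

# The same request may be denied once the estimated misuse probability rises.
print(authorise(benefit=10, misuse_cost=100, p_misuse=0.05))
print(authorise(benefit=10, misuse_cost=100, p_misuse=0.20))
```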

Relevance:

30.00%

Publisher:

Abstract:

Research demonstrates that education programs are more effective when their target audiences and other key stakeholder groups are involved in their design, yet the majority of programs continue to be researcher and expert driven. This study extends previous research by employing a co-creation research design to offer a consumer-driven alternative to education program design. Two co-creation groups were conducted, involving twenty 14-16-year-old Year 10 students who had previously participated in the Game On:Know Alcohol (GOKA) program, which aims to moderate alcohol drinking attitudes and behaviour. Analysis revealed that a co-created GOKA program would differ substantially from the researcher- and expert-driven program that is currently being field tested. Students prefer interactive activities and activities that engage and challenge. Co-creation offers the potential to contest researcher and expert views and may assist in generating new insights for the development of education programs.

Relevance:

30.00%

Publisher:

Abstract:

As an animator and practice-based researcher with a background in games development, I am interested in technological change in the video game medium, with a focus on the tools and technologies that drive game character animation and interactive story. In particular, I am concerned with the issue of ‘user agency’, or the ability of the end user to affect story development—a key quality of the gaming experience and essential to the aesthetics of gaming, which is defined in large measure by its interactive elements. In this paper I consider the unique qualities of the video game as an artistic medium and the impact that these qualities have on the production of animated virtual character performances. I discuss the somewhat oppositional nature of animated character performances found in games from recent years, which range from inactive to active—in other words, low to high agency. Where procedural techniques (based on coded rules of movement) are used to model dynamic character performances, the user has the ability to interactively affect characters in real time within the larger sphere of the game. This game play creates a high degree of user agency. However, it lacks the aesthetic nuances of the more crafted sections of games: the short cut-scenes, or narrative interludes, where entire acted performances are mapped onto game characters (often via performance capture) and constructed into relatively cinematic representations. While visually spectacular, cut-scenes involve minimal interactivity, so user agency is low. Contemporary games typically float between these two distinct methods of animation, from a focus on user agency and dynamically responsive animation to a focus on animated character performance in sections where the user is a passive participant. We tend to think of the majority of action in games as taking place via playable figures: an avatar or central character that represents a player.
However, there is another realm of characters that also partake in actions ranging from significant to incidental: non-playable characters, or NPCs, which populate action sequences where game play takes place as well as cut-scenes that unfold without much or any interaction on the part of the player. NPCs are the equivalent of supporting roles, bit characters, or extras in the world of cinema. Minor NPCs may simply be background characters or enemies to defeat, but many NPCs are crucial to the overall game story. It is my argument that, thus far, no game has successfully utilized the full potential of these characters to contribute toward the development of interactive, high-performance action. In particular, a type of NPC that I have identified as ‘pivotal’—those constituting the supporting cast of a video game—are essential to the telling of a game story, particularly in genres that focus on story and characters: adventure games, action games, and role-playing games. A game story can be defined as the entirety of the narrative, told through non-interactive cut-scenes as well as interactive sections of play, and the development of more complex stories in games clearly impacts the animation of NPCs. I argue that NPCs in games must be capable of acting with emotion throughout a game—in the cut-scenes, which are tightly controlled, but also in sections of game play, where player agency can potentially alter the story in real time. When the animated performance of NPCs and user agency are not continuous throughout the game, the implication is that game stories may be primarily told through short movies within games, making it more difficult to define video game animation as a distinct artistic medium.

Relevance:

30.00%

Publisher:

Abstract:

Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.

The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis can be placed in one of the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces an iterated rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.

3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
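As background for the flow-time objective that runs through the thesis, a short sketch of single-machine SRPT (Shortest Remaining Processing Time), which minimizes total flow-time in the clairvoyant single-machine setting, may help; the code and example are mine, not from the thesis:

```python
import heapq

def srpt_total_flow_time(jobs):
    """Total flow-time of SRPT on a single machine.

    jobs: list of (release_time, size). Flow-time of a job is its
    completion time minus its release time; SRPT always runs the job
    with the smallest remaining processing time.
    """
    jobs = sorted(jobs)                       # by release time
    heap, t, i, total = [], 0, 0, 0
    while i < len(jobs) or heap:
        if not heap:                          # idle until next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            r, p = jobs[i]
            heapq.heappush(heap, (p, r))      # keyed by remaining size
            i += 1
        p, r = heapq.heappop(heap)
        # Run the smallest job until it finishes or a new job arrives.
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(p, next_arrival - t)
        t += run
        if run < p:
            heapq.heappush(heap, (p - run, r))
        else:
            total += t - r                    # completion minus release
    return total

# Job of size 3 released at 0 is preempted by a size-1 job released at 1.
print(srpt_total_flow_time([(0, 3), (1, 1)]))
```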

Relevance:

30.00%

Publisher:

Abstract:

Experimental research has shown that playing violent video games produces higher levels of aggressive cognition, aggressive affect, physiological arousal, and aggressive behavior (in the short term) than non-violent video games (see Anderson, Gentile & Buckley, 2007). However, there are two major limitations with these studies. First, the majority of experimental studies that have compared the effects of violent versus non-violent video games on aggression have failed to equate these games in terms of competitiveness, difficulty, and pace of action. Thus, although the common finding is that violent video games produce higher levels of aggression than non-violent video games, unmatched factors other than the violent content itself may be responsible for the elevated levels of aggression. Second, previous experimental studies have tended to use a measure of aggression that may also measure competitiveness, raising questions about whether violent video games are related to aggression or to competitiveness. The present thesis addressed these two issues by first equating a violent and a non-violent video game on competitiveness, difficulty, and pace of action in Experiment I, and then comparing the effect of each game on aggressive behavior using an unambiguous measure of aggressive behavior (i.e., the Hot Sauce Paradigm). We found that video game violence was not sufficient to elevate aggressive behavior compared to a non-violent video game. Practical implications and directions for future research are discussed.

Relevance:

30.00%

Publisher:

Abstract:

Limited academic attention has been given to the nexus between corruption in soccer and its impact on fandom. Consequently, the purpose of this qualitative study was to better understand the lived experiences of highly identified soccer fanatics living through this era of match fixing in the sport. The social networking site Twitter was utilized to recruit participants from three continents (Africa, Europe, and North America) based on submissions to the site in response to a perceived fix in a high-profile March 2013 match. A total of 12 semi-structured interviews were conducted with highly identified soccer fans in accordance with Funk and James' (2001) Psychological Continuum Model (PCM). Although the majority of participants felt skepticism about the purity of soccer today, half of the participants' fandom remained unchanged in the face of perceived match fixing. Directions for future research and recommendations are considered and discussed.

Relevance:

30.00%

Publisher:

Abstract:

Since the mid-1990s, companies have adopted agile methods and incorporated them into their development methodologies. For this reason, future project managers and developers need a full understanding of these methods. At present, the university's approach to agile methods is theoretical and does not cover their practical use during the development of a product. The purpose of this project is the creation of a software system in the form of a game, named Agile Game, which simulates their use. The system is designed for use as supplementary material in lectures, to help students understand agile methods, to present their use within a project, and to demonstrate how they differ from traditional project management methodologies. The final system, which is web based, was implemented using PHP, MySQL and JavaScript. It was fully tested against the requirements and evaluated by peer students. The evaluation showed that the majority of users were satisfied with the system but thought it should contain more detailed information at every step of the game. For this reason, some parts of the design and the content were revised to meet user requirements.

Relevance:

30.00%

Publisher:

Abstract:

Following the US model, the UK has seen considerable innovation in the funding, finance and procurement of real estate in the last decade. In the growing CMBS market, asset-backed securitisations have included $2.25 billion secured on the Broadgate office development and issues secured on Canary Wharf and the Trafford Centre regional mall. Major occupiers (the retailer Sainsbury's, the retail bank Abbey National) have engaged in innovative sale-and-leaseback and outsourcing schemes. Strong claims are made concerning the benefits of such schemes; for example, British Land were reported to have reduced their weighted cost of debt by 150 bp as a result of the Broadgate issue. The paper reports preliminary findings from a project, funded by the Corporation of London and the RICS Research Foundation, examining a number of innovative schemes to identify, within a formal finance framework, sources of added value and hidden costs. The analysis indicates that many of the claimed gains conceal costs, in terms of the market value of debt or flexibility of management, while others result from unusual firm or market conditions (for example, exploiting the UK long lease and the unusual shape of the yield curve). Nonetheless, there are real gains resulting from the innovations, reflecting arbitrage and institutional constraints in the direct (private) real estate market.
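The reported 150 bp saving translates directly into annual interest terms (the debt figure below is hypothetical; only the basis-point arithmetic is from the abstract):

```python
def annual_interest_saving(debt, bp_reduction):
    """Annual interest saved when the weighted cost of debt falls by
    bp_reduction basis points (1 bp = 0.01 percentage points)."""
    return debt * bp_reduction / 10_000

# A hypothetical 1bn of debt at a 150 bp lower weighted cost of debt.
print(annual_interest_saving(1_000_000_000, 150))
```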