909 results for Game theory.


Relevance: 60.00%

Abstract:

Reduction of carbon emissions is of paramount importance in the context of global warming and climate change. Countries and global companies are now engaged in understanding systematic ways of solving carbon economics problems, aimed ultimately at achieving well-defined emission targets. This paper proposes mechanism design as an approach to solving carbon economics problems. The paper first introduces the carbon economics issues facing the world today and then focuses on the carbon economics problems facing global industries. It identifies four such problems: carbon credit allocation (CCA), carbon credit buying (CCB), carbon credit selling (CCS), and carbon credit exchange (CCE). It is argued that these problems are best addressed as mechanism design problems. The discipline of mechanism design is founded on game theory and is concerned with settings where a social planner faces the problem of aggregating the announced preferences of multiple agents into a collective decision when the actual preferences are not publicly known. The paper provides an overview of mechanism design and presents the challenges involved in designing mechanisms with desirable properties. To illustrate the application of mechanism design in carbon economics, the paper describes in detail one specific problem, the carbon credit allocation problem.
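To make the mechanism-design framing concrete, the following is a minimal sketch of a dominant-strategy (Vickrey/VCG) allocation for a stylized CCA setting, assuming each firm demands at most one identical block of credits; the firm names and bids are hypothetical, not from the paper.

```python
# Minimal VCG sketch for a stylized carbon credit allocation (CCA) problem.
# Assumption (not from the paper): each firm wants at most one identical
# block of credits and reports a private per-block value. With unit demand,
# each winner's VCG payment is the highest losing bid, so truthful
# reporting is a dominant strategy.

def allocate_credits(reported_values, num_blocks):
    """Allocate num_blocks credit blocks to the highest reporters."""
    ranked = sorted(reported_values.items(), key=lambda kv: kv[1], reverse=True)
    winners = [firm for firm, _ in ranked[:num_blocks]]
    # Highest losing bid (0 if supply exceeds demand); it does not depend
    # on any winner's own report, which is what makes truth-telling dominant.
    payment = ranked[num_blocks][1] if len(ranked) > num_blocks else 0.0
    return winners, payment

bids = {"firmA": 12.0, "firmB": 9.5, "firmC": 7.0, "firmD": 4.0}
print(allocate_credits(bids, num_blocks=2))  # (['firmA', 'firmB'], 7.0)
```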

Relevance: 60.00%

Abstract:

We investigate the problem of influence limitation in the presence of competing campaigns in a social network. Given a negative campaign that starts propagating from a specified source, and a positive/counter campaign that is initiated after a certain time delay to limit the influence or spread of misinformation by the negative campaign, we are interested in finding the top k influential nodes at which the positive campaign may be triggered. This problem has numerous applications, such as limiting the propagation of rumors, arresting the spread of a virus through inoculation, and initiating a counter-campaign against malicious propaganda. The influence function for the generic influence limitation problem is non-submodular. Restricted versions of the influence limitation problem reported in the literature assume submodularity of the influence function and do not capture the problem in a realistic setting. In this paper, we propose a novel computational approach to the influence limitation problem based on the Shapley value, a solution concept in cooperative game theory. Our approach works equally effectively for both submodular and non-submodular influence functions. Experiments on standard real-world social network datasets reveal that the proposed approach outperforms existing heuristics in the literature. As a non-trivial extension, we also address the problem of influence limitation in the presence of multiple competing campaigns.
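For concreteness, here is a minimal sketch of the permutation-sampling estimator commonly used for Shapley values, the cooperative-game quantity by which the paper ranks nodes; the coverage-style characteristic function below is a toy stand-in for an influence function, and the sampling scheme applies unchanged whether or not f is submodular.

```python
# Monte Carlo Shapley value estimation by sampling random orderings.
import random

def shapley_estimate(players, f, num_samples=1000):
    """Estimate phi_p = E[f(S_p + {p}) - f(S_p)] over random orderings."""
    phi = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = random.sample(players, len(players))  # random permutation
        coalition, value = set(), f(set())
        for p in order:
            coalition.add(p)
            new_value = f(coalition)
            phi[p] += new_value - value   # p's marginal contribution
            value = new_value
    return {p: v / num_samples for p, v in phi.items()}

# Toy characteristic function: how many ground-set elements a coalition covers.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
f = lambda S: len(set().union(*(covers[p] for p in S))) if S else 0
print(shapley_estimate(list(covers), f, num_samples=5000))
```

Ranking nodes by their estimated values and seeding the counter-campaign at the top k is then the selection step.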

Relevance: 60.00%

Abstract:

The scope of the paper is the literature that employs coordination games to study social norms and conventions from the viewpoints of game theory and cognitive psychology. We claim that these two approaches are complementary, as they provide different insights into how people converge to a unique system of self-fulfilling expectations in the presence of multiple, equally viable conventions. While game theory explains the emergence of conventions through efficiency and risk considerations, the psychological view is more concerned with framing and labeling effects. The interaction between these alternative (and sometimes competing) effects leads to the result that coordination failures may well occur and, even when coordination takes place, there is no guarantee that the convention eventually established will be the most efficient one.
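A textbook illustration (not taken from the paper) of the efficiency-versus-risk tension is the symmetric stag hunt:

```latex
\[
\begin{array}{c|cc}
 & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & 4,\,4 & 0,\,3 \\
\text{Hare} & 3,\,0 & 3,\,3
\end{array}
\]
% (Stag, Stag) is payoff dominant (4 > 3), but (Hare, Hare) is risk
% dominant: the deviation losses compare as 4 - 3 = 1 < 3 = 3 - 0, so
% against a partner believed to mix uniformly, Hare earns 3 > (4 + 0)/2 = 2,
% and the inefficient convention can become established.
```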

Relevance: 60.00%

Abstract:

Published as an article in: Journal of Economic Methodology, 2010, vol. 17, issue 3, pages 261-275.

Relevance: 60.00%

Abstract:

We report the findings of an experiment designed to study how people learn and make decisions in network games. Network games offer new opportunities to identify learning rules, since on networks (compared with, e.g., random matching) more rules differ in terms of their information requirements. Our experimental design enables us to observe both which actions participants choose and which information they consult before making their choices. We use this information to estimate learning types using maximum likelihood methods. There is substantial heterogeneity in learning types. However, the vast majority of our participants' decisions are best characterized by reinforcement learning or (myopic) best-response learning. The distribution of learning types seems fairly stable across contexts: neither network topology nor a player's position in the network seems to substantially affect the estimated distribution of learning types.
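For concreteness, here is a minimal sketch of the two rules that best describe most subjects; the 2x2 matching payoff, action labels, and neighbor actions are illustrative assumptions, not the experiment's actual games.

```python
# Two stylized learning rules on a network.
import random

def reinforcement_choice(propensities):
    """Reinforcement learning: choose an action with probability
    proportional to its accumulated payoff propensity."""
    total = sum(propensities.values())
    r, acc = random.uniform(0, total), 0.0
    for action, q in propensities.items():
        acc += q
        if r <= acc:
            return action
    return action  # numerical fallback

def myopic_best_response(my_actions, neighbor_actions, payoff):
    """Myopic best response: best reply assuming each network neighbor
    repeats the last action observed from them."""
    return max(my_actions,
               key=lambda a: sum(payoff(a, b) for b in neighbor_actions))

payoff = lambda a, b: 1.0 if a == b else 0.0   # coordinate with neighbors
print(myopic_best_response(["L", "R"], ["R", "R"], payoff))  # 'R'
print(reinforcement_choice({"L": 1.0, "R": 3.0}))            # 'R' w.p. 3/4
```

The two rules differ in their information requirements (realized own payoffs versus neighbors' observed actions), which is exactly what a look-up design can exploit for identification.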

Relevance: 60.00%

Abstract:

Intended audience: students of Microeconomic Theory IV (3rd year, LE), and students of the Game Theory and Industrial Organization courses of the Máster en Economía: Instrumentos del Análisis Económico. These notes on imperfect competition are devoted to the study of market structures characterized by the existence of market power. Monopoly is studied first, with special attention to the different types of price discrimination. Non-cooperative game theory is then introduced, and its usefulness for analyzing economic phenomena characterized by strategic interdependence is demonstrated. Finally, several models of oligopolistic competition and the stability of collusive agreements are studied.
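As a worked instance of the strategic interdependence these notes analyze, consider the standard Cournot duopoly with linear inverse demand and constant marginal cost (a textbook example, not reproduced from the notes):

```latex
% Cournot duopoly: inverse demand P = a - b(q_1 + q_2), marginal cost c < a.
\[
\max_{q_i}\;\bigl(a - b(q_1 + q_2) - c\bigr)\,q_i
\;\Longrightarrow\;
q_i^{BR}(q_j) = \frac{a - c - b\,q_j}{2b},
\]
\[
q_1^* = q_2^* = \frac{a - c}{3b},
\qquad
P^* = \frac{a + 2c}{3},
\]
% so c < P^* < (a + c)/2: each firm exercises market power, but the
% equilibrium price lies below the monopoly price.
```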

Relevance: 60.00%

Abstract:

There is a growing body of experimental evidence suggesting that people often deviate from the predictions of game theory. Some scholars attempt to explain the observations by introducing errors into behavioral models; however, most of these modifications are situation dependent and do not generalize. A new theory, called the rational novice model, is introduced as an attempt to provide a general theory that accounts for erroneous behavior. The rational novice model is based on two central principles. The first is that people systematically make inaccurate guesses when evaluating their options in a game-like situation. The second is that people treat their decisions like a portfolio problem. As a result, actions that are non-optimal in a game-theoretic sense may be included in the rational novice strategy profile with positive weights.
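One possible reading of those two principles (an illustrative sketch, not the thesis's actual formulation) is a softmax-style portfolio over noisy payoff guesses, which puts positive weight on every action:

```python
# Illustrative sketch: noisy payoff guesses (principle 1) fed into a
# portfolio-style weighting over actions (principle 2). All names and
# functional forms here are assumptions for illustration.
import math, random

def rational_novice_profile(true_payoffs, noise=1.0, temperature=1.0):
    """Noisy guesses -> softmax 'portfolio' weights over actions."""
    guesses = {a: u + random.gauss(0.0, noise) for a, u in true_payoffs.items()}
    z = sum(math.exp(g / temperature) for g in guesses.values())
    return {a: math.exp(g / temperature) / z for a, g in guesses.items()}

# Every action receives positive weight; as noise and temperature shrink,
# the profile concentrates on the best reply, consistent with Nash
# equilibrium arising as a special case.
print(rational_novice_profile({"contribute": 0.4, "free_ride": 1.0}))
```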

The rational novice model can be divided into two parts: the behavioral model and the equilibrium concept. A theoretical chapter introduces the mathematics of the behavioral model and the equilibrium concept, establishes the existence of the equilibrium, and shows that the Nash equilibrium is a special case of the rational novice equilibrium. In another chapter, the rational novice model is applied to a voluntary contribution game, with numerical methods used to obtain the solution. The model is estimated with data from the Palfrey and Prisbrey experimental study of the voluntary contribution game. The rational novice model is found to explain the data better than the Nash model. Although a formal statistical test was not used, pseudo-R^2 analysis indicates that the rational novice model is better than a probit model similar to the one used in the Palfrey and Prisbrey study.

The rational novice model is also applied to a first-price sealed-bid auction. Again, numerical techniques were used to obtain a solution, and the model was estimated on data from the Chen and Plott study. The rational novice model outperforms the CRRAM, the primary Nash model examined by Chen and Plott. However, the rational novice model is not the best among all models: a sophisticated rule of thumb, called the SOPAM, offers the best explanation of the data.

Relevance: 60.00%

Abstract:

The dissertation studies the general area of complex networked systems, which consist of interconnected, active, heterogeneous components and usually operate in uncertain environments with incomplete information. Problems associated with these systems are typically large-scale and computationally intractable, yet they are also well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools that exploit those structures to obtain computationally efficient and distributed solutions, and to apply them to improve systems operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, but there are daunting technical challenges in managing them and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, real-time control and optimization that achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management, and we show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective, and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium, as in the sketch below. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
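As one concrete instance of the learning step, here is a minimal sketch of log-linear learning, a standard distributed algorithm whose long-run behavior in potential games concentrates on potential-maximizing equilibria; the two-agent coordination game is an illustrative assumption, not one of the thesis's applications.

```python
# Log-linear learning: a randomly chosen agent revises its action with
# logit probabilities based only on its own (local) utility.
import math, random

def log_linear_step(actions, profile, agent, utility, temperature=0.1):
    """One revision: `agent` samples an action with probability
    proportional to exp(utility / temperature)."""
    weights = [math.exp(utility(agent, dict(profile, **{agent: a})) / temperature)
               for a in actions]
    r = random.uniform(0, sum(weights))
    for a, w in zip(actions, weights):
        r -= w
        if r <= 0:
            return a
    return actions[-1]  # numerical fallback

# Two agents earn 1 by coordinating; the common payoff is the potential.
utility = lambda i, prof: 1.0 if prof["a1"] == prof["a2"] else 0.0
profile = {"a1": "x", "a2": "y"}
for _ in range(100):
    agent = random.choice(["a1", "a2"])
    profile[agent] = log_linear_step(["x", "y"], profile, agent, utility)
print(profile)  # with high probability a coordinated profile
```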

Relevance: 60.00%

Abstract:

Roughly one half of the world's languages are in danger of extinction. Endangered languages, spoken by minorities, typically compete with powerful languages such as English or Spanish. Consequently, speakers of minority languages have to take into account that not everybody can speak their language, turning language choice into a strategic, coordination-like situation. We show experimentally that the displacement of minority languages may be partially explained by imperfect information about the linguistic type of the partner, leading to frequent failures to coordinate on the minority language even between two speakers who can and prefer to use it. The extent of miscoordination correlates with how minoritarian a language is and with the real-life linguistic condition of the subjects: the more endangered a language, the harder it is to coordinate on its use, and the people on whom the language's survival most relies acquire behavioral strategies that lower its use. Our game-theoretical treatment of the issue provides a new perspective for linguistic policies.
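A stylized back-of-the-envelope version of the trade-off (an illustration, not the paper's exact design): suppose a bilingual who prefers the minority language meets a partner who also speaks it with probability p.

```latex
% Opening in the minority language pays a if the partner speaks it and 0
% otherwise; the majority language pays b < a for sure. The minority
% language is the best opening move only if
\[
p\,a > b \quad\Longleftrightarrow\quad p > \frac{b}{a},
\]
% so the smaller the share p of minority-language speakers, the more often
% even willing speakers rationally default to the majority language.
```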

Relevance: 60.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioral economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We look first at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioral theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determines the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
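A simplified, noiseless sketch of EC2-style greedy test selection (the hypothesis set, priors, and choice table are illustrative; the actual BROAD implementation handles noisy responses and parametric theories):

```python
# EC2 idea: hypotheses are grouped into equivalence classes (theories);
# edges connect hypotheses in different classes with weight equal to the
# product of their priors; the greedy rule picks the test with the largest
# expected weight of edges cut (an edge is cut once either endpoint is
# ruled out by the observed choice).
from itertools import combinations

def expected_cut(test, hypotheses, prior, theory_of, predicts):
    gain = 0.0
    for outcome in (0, 1):
        survivors = {h for h in hypotheses if predicts(h, test) == outcome}
        p_outcome = sum(prior[h] for h in survivors)
        cut = sum(prior[h] * prior[g]
                  for h, g in combinations(hypotheses, 2)
                  if theory_of(h) != theory_of(g)
                  and (h not in survivors or g not in survivors))
        gain += p_outcome * cut
    return gain

def next_test(tests, hypotheses, prior, theory_of, predicts):
    return max(tests, key=lambda t: expected_cut(t, hypotheses, prior,
                                                 theory_of, predicts))

hypotheses = ["EV", "PT_strong", "PT_mild"]
prior = {"EV": 0.4, "PT_strong": 0.3, "PT_mild": 0.3}
theory_of = lambda h: "EV" if h == "EV" else "PT"
choice = {("EV", 0): 1, ("PT_strong", 0): 0, ("PT_mild", 0): 1,
          ("EV", 1): 1, ("PT_strong", 1): 0, ("PT_mild", 1): 0}
predicts = lambda h, t: choice[(h, t)]
print(next_test([0, 1], hypotheses, prior, theory_of, predicts))  # test 1
```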

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favor of CRRA and moments models; classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favorable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem: strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
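For reference, the canonical Kahneman-Tversky value function underlying the prospect theory class (standard form; the thesis's exact parameterization may differ):

```latex
\[
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0,\\[2pt]
-\lambda\,(-x)^{\beta} & x < 0,
\end{cases}
\qquad \lambda > 1 \ \text{(loss aversion)},\quad 0 < \alpha, \beta \le 1,
\]
% evaluated relative to a reference point, versus CRRA utility
% u(x) = x^{1-\rho}/(1-\rho) defined over final wealth.
```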

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
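The standard forms of the discount functions being compared are (common notation; the thesis's (α, β) parameterization of quasi-hyperbolic discounting corresponds to the usual (β, δ) form):

```latex
\[
D_{\mathrm{exp}}(t) = \delta^{t},
\qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t},
\qquad
D_{\beta\delta}(t) =
\begin{cases}
1 & t = 0,\\
\beta\,\delta^{t} & t > 0,
\end{cases}
\qquad
D_{\mathrm{gen}}(t) = \frac{1}{(1 + \alpha t)^{\beta/\alpha}} .
\]
% Exponential discounting is time-consistent; the others allow preference
% reversals as the earlier payoff draws near.
```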

In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild", focusing on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity explains; even more importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
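An illustrative sketch (assumed functional form, not the estimated retailer model) of how loss aversion around a reference price enters a logit discrete-choice demand model:

```python
# Reference-dependent logit choice: price moves are evaluated against a
# reference price, and losses (e.g., a discount ending) are weighted
# lambda > 1 times as heavily as equal-sized gains. All parameter values
# are hypothetical.
import math

def choice_probs(prices, ref_prices, values, eta=1.0, lam=2.25):
    utils = []
    for p, r, v in zip(prices, ref_prices, values):
        gain = max(r - p, 0.0)   # paying less than the reference
        loss = max(p - r, 0.0)   # paying more than the reference
        utils.append(v - p + eta * (gain - lam * loss))
    z = sum(math.exp(u) for u in utils)
    return [math.exp(u) / z for u in utils]

# A discount ends: item 1 returns to 10.0 against a reference of 8.0, so
# demand shifts toward the close substitute by more than the price gap
# alone would explain.
print(choice_probs(prices=[10.0, 9.5], ref_prices=[8.0, 9.5],
                   values=[12.0, 11.0]))
```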

In future work, BROAD can be applied widely to test different behavioral models, e.g., in social preferences and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioral models with field data and encourage combined lab-field experiments.

Relevance: 60.00%

Abstract:

The power system is on the brink of change. Engineering needs, economic forces, and environmental factors are the main drivers of this change. The vision is to build a smart electrical grid, and a smarter market mechanism around it, to fulfill mandates on clean energy. Looking at engineering and economic issues in isolation is no longer an option; an integrated design approach is needed. In this thesis, I revisit some of the classical questions on the engineering operation of power systems that deal with the nonconvexity of the power flow equations. I then explore how these power flow equations interact with electricity markets, to address the fundamental issue of market power in a deregulated market environment. Finally, motivated by the emergence of new storage technologies, I present a result on the investment decision problem of placing storage in a power network. The goal of this study is to demonstrate that modern optimization and game theory can provide unique insights into this complex system; some of the ideas carry over to applications beyond power systems.
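For reference, the bus-injection form of the AC power flow equations whose nonconvexity drives much of the analysis (standard form):

```latex
% At each bus i, with V_i = |V_i| e^{j\theta_i}, \theta_{ik} = \theta_i - \theta_k,
% and bus admittance matrix Y = G + jB:
\[
P_i = \sum_{k} |V_i||V_k|\bigl(G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik}\bigr),
\qquad
Q_i = \sum_{k} |V_i||V_k|\bigl(G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik}\bigr).
\]
% The bilinear and trigonometric terms make the feasible set nonconvex,
% which is what complicates both dispatch and market analysis.
```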

Relevance: 60.00%

Abstract:

This thesis consists of three essays in the areas of political economy and game theory, unified by their focus on the effects of pre-play communication on equilibrium outcomes.

Communication is fundamental to elections. Chapter 2 extends canonical voter turnout models, where citizens, divided into two competing parties, choose between costly voting and abstaining, to include any form of communication, and characterizes the resulting set of Aumann's correlated equilibria. In contrast to previous research, high-turnout equilibria exist in large electorates and uncertain environments. This difference arises because communication can coordinate behavior in such a way that citizens find it incentive compatible to follow their correlated signals to vote more. The equilibria have expected turnout of at least twice the size of the minority for a wide range of positive voting costs.
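For reference, Aumann's correlated equilibrium requires that a distribution μ over action profiles make obedience a best reply (standard definition):

```latex
% For every player i, every recommended action a_i, and every deviation a_i':
\[
\sum_{a_{-i}} \mu(a_i, a_{-i})\,
\bigl[u_i(a_i, a_{-i}) - u_i(a_i', a_{-i})\bigr] \;\ge\; 0 .
\]
% Pre-play communication (polls, canvassing, party signals) can implement
% such a \mu, which is how correlated signals can sustain higher turnout
% than Nash equilibrium allows.
```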

In Chapter 3 I introduce a new equilibrium concept, called subcorrelated equilibrium, which fills the gap between Nash and correlated equilibrium, extending the latter to multiple mediators. Subcommunication equilibrium similarly extends communication equilibrium for incomplete information games. I explore the properties of these solutions and establish an equivalence between a subset of subcommunication equilibria and Myerson's quasi-principals' equilibria. I characterize an upper bound on expected turnout supported by subcorrelated equilibrium in the turnout game.

Chapter 4, co-authored with Thomas Palfrey, reports a new study of the effect of communication on voter turnout using a laboratory experiment. Before voting occurs, subjects may engage in various kinds of pre-play communication through computers. We study three communication treatments: No Communication, a control; Public Communication, where voters exchange public messages with all other voters; and Party Communication, where messages are exchanged only within one's own party. Our results point to a strong interaction effect between the form of communication and the voting cost. With a low voting cost, party communication increases turnout while public communication decreases turnout, and the data are consistent with correlated equilibrium play. With a high voting cost, public communication increases turnout. With communication, we find essentially no support for the standard Nash equilibrium turnout predictions.

Relevance: 60.00%

Abstract:

This thesis is part of the body of research that seeks to understand how elections work in Brazil. Specifically, its objective is to investigate negative advertising during presidential elections. To this end, five chapters were developed. The first situates the reader in the normative debate on the role of negative advertising in electoral democracy, discussing the importance of attacks in a range of circumstances, such as political mobilization, the information environment, and vote choice. The second chapter is a broad content analysis of the negative advertising aired during the Horário Gratuito de Propaganda Eleitoral (free electoral advertising time) in the presidential elections of 1989, 1994, 1998, 2002, 2006, and 2010, covering both first and second rounds. The methodology follows the guidelines formulated by Figueiredo et al. (1998), adapted to the specificities of negative advertising. Interesting trends were uncovered; the most interesting, without doubt, is the low rate of attacks between candidates. The third chapter investigates the strategic use of spot advertisements (inserções) during presidential campaigns. I discuss the regulated character of the Brazilian model of political advertising; even so, I identify divergent strategies in the use of negative spots, with the evening slot being the predominant locus of attacks. The fourth chapter builds a model of negative campaigning based on game theory, addressing the questions of who attacks whom, when, and why. I argue that negative advertising is the last resort candidates use in the contest for votes; its central purpose is to change the adversary's trend in the polls. For this reason, it is used mainly by candidates at a disadvantage in vote-intention polls. The fifth and final chapter develops a statistical model to measure the impact of negative advertising on vote-intention indices.

Relevance: 60.00%

Abstract:

This dissertation proposes an alternative approach to the mathematical simulation of a worrying scenario in ecology: the control of pests harmful to a given soybean crop in a specific geographic region. The theoretical toolkit employed is game theory, coupling tools from discrete mathematics with the analysis and solution of initial value problems in differential equations, specifically the Lotka-Volterra population dynamics equations with competition. With the parameters initially used, these equations, which model predator-prey behavior, have an equilibrium point higher than is desired in the agricultural context under examination, which makes it necessary to use optimal control theory. The scheme developed in this work leads to tools simple enough to be viable in real situations. The data used to treat the problem that motivated this interdisciplinary research were collected from publications of the Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA).
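For reference, one standard form of the Lotka-Volterra predator-prey dynamics with competition terms (generic textbook form; the dissertation's parameter values come from the EMBRAPA data):

```latex
\[
\dot{x} = x\,(\alpha - \beta y - \epsilon x),
\qquad
\dot{y} = y\,(-\gamma + \delta x - \eta y),
\]
% where x is the pest (prey) population and y its natural predator; the
% optimal control problem arises because the uncontrolled interior
% equilibrium sits above the pest level desired for the soybean crop.
```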