948 results for [JEL:C70] Mathematical and Quantitative Methods - Game Theory and Bargaining Theory - General


Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Many years ago Zel'dovich showed how the Lagrange condition in the theory of differential equations can be utilized in the perturbation theory of quantum mechanics. Zel'dovich's method enables us to circumvent the summation over intermediate states. Compared with other similar methods, in particular the logarithmic perturbation expansion method, this relatively little-known method of Zel'dovich has a remarkable advantage in dealing with excited states: the ground and excited states can all be treated in the same way, and the nodes of the unperturbed wavefunction do not give rise to any complication.
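For orientation, the textbook route such methods avoid, and the differential-equation shortcut, can be sketched as follows (in the spirit of Dalgarno-Lewis-type approaches, not necessarily Zel'dovich's exact formulation):

```latex
% Standard second-order correction: requires all intermediate states
E^{(2)} = \sum_{n \neq 0} \frac{|\langle n | V | 0 \rangle|^2}{E_0 - E_n}
% The sum is avoided by solving an inhomogeneous equation for the
% first-order correction directly,
(H_0 - E_0)\,|\psi^{(1)}\rangle = \bigl(E^{(1)} - V\bigr)\,|\psi_0\rangle ,
\qquad E^{(1)} = \langle \psi_0 | V | \psi_0 \rangle ,
% after which
E^{(2)} = \langle \psi_0 |\, V - E^{(1)} \,| \psi^{(1)} \rangle .
```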

Relevance:

100.00%

Publisher:

Abstract:

In recent years physicists have contributed significantly to the construction of a type of agent-based model that seeks to reproduce, in computer simulation, the behaviour of financial markets. This model, called the Minority Game, consists of a group of agents who go to the market to buy or sell assets. They make decisions based on strategies, and through these the agents establish an intricate game of competition and coordination over the distribution of wealth. The model has produced rich and surprising results, both in the dynamics of the system and in its capacity to reproduce statistical and behavioural features of financial markets. This article presents the structure and dynamics of the Minority Game, as well as recent contributions related to the so-called Grand Canonical Minority Game, a model better fitted to the characteristics of financial markets that reproduces the statistical regularities of asset prices known as stylized facts.
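The basic Minority Game dynamics described above can be sketched in a few lines; the parameters below (101 agents, memory 3, 2 strategies each) are illustrative defaults, not values from the article:

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2, rounds=200, seed=42):
    """Minimal Minority Game: agents with fixed random strategy tables
    choose +1 (buy) or -1 (sell); the minority side wins each round."""
    rng = random.Random(seed)
    n_histories = 2 ** memory
    # Each strategy is a lookup table: history index -> action (+1/-1).
    agents = [[[rng.choice((-1, 1)) for _ in range(n_histories)]
               for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_histories)
    attendance = []
    for _ in range(rounds):
        actions = []
        for a in range(n_agents):
            # Each agent plays its currently best-scoring strategy.
            best = max(range(n_strategies), key=lambda s: scores[a][s])
            actions.append(agents[a][best][history])
        total = sum(actions)               # aggregate demand A(t)
        minority = -1 if total > 0 else 1  # the minority side wins
        for a in range(n_agents):
            for s in range(n_strategies):
                if agents[a][s][history] == minority:
                    scores[a][s] += 1      # reward strategies that won
        attendance.append(total)
        # Shift the winning side into the binary history window.
        history = ((history << 1) | (1 if minority == 1 else 0)) % n_histories
    return attendance

att = minority_game()
```

With an odd number of agents the attendance A(t) is always odd, so there is a strict minority every round.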

Relevance:

100.00%

Publisher:

Abstract:

We consider arbitrary U(1) charged matter non-minimally coupled to the self-dual field in d = 2 + 1. The coupling includes a linear and a rather general quadratic term in the self-dual field. By using both the Lagrangian gauge embedding and master action approaches we derive the dual Maxwell-Chern-Simons-type model and show the classical equivalence between the two theories. At the quantum level the master action approach in general requires the addition of an awkward extra term to the Maxwell-Chern-Simons-type theory. Only in the case of a linear coupling in the self-dual field can the extra term be dropped, and only then are we able to establish the quantum equivalence of gauge invariant correlation functions in the two theories.
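For background, the free (uncoupled) version of this duality is the classic Deser-Jackiw equivalence; normalizations vary by convention, so the following is only a sketch in one common convention, not the coupled model of the abstract:

```latex
% Self-dual model in d = 2+1 (free case, one convention):
\mathcal{L}_{\mathrm{SD}} = \frac{m^2}{2}\, f_\mu f^\mu
  - \frac{m}{2}\,\epsilon^{\mu\nu\lambda} f_\mu \partial_\nu f_\lambda
% is classically equivalent to the Maxwell-Chern-Simons theory
\mathcal{L}_{\mathrm{MCS}} = -\frac{1}{4}\, F_{\mu\nu}F^{\mu\nu}
  + \frac{m}{2}\,\epsilon^{\mu\nu\lambda} A_\mu \partial_\nu A_\lambda ,
% with the identification (up to normalization)
f_\mu \;\mapsto\; \frac{1}{m}\,\epsilon_{\mu}{}^{\nu\lambda}\,\partial_\nu A_\lambda .
```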

Relevance:

100.00%

Publisher:

Abstract:

In this work a nonzero-sum Nash game related to the H2 and H∞ control problems is formulated in the context of convex optimization theory. The variables of the game are limiting bounds for the H2 and H∞ norms, and the final controller is obtained as an equilibrium solution which minimizes the 'sensitivity of each norm' with respect to the other. The state feedback problem is considered and illustrated by numerical examples.
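The equilibrium concept used above, though not the H2/H∞ formulation itself, can be illustrated with a toy two-player quadratic game solved by iterated best response; all costs and coefficients below are hypothetical:

```python
def best_response_nash(a=1.0, b=2.0, c=0.5, d=0.5, iters=200):
    """Iterated best response for a two-player quadratic game:
       J1(u1, u2) = (u1 - a)^2 + c*u1*u2   (player 1 chooses u1)
       J2(u1, u2) = (u2 - b)^2 + d*u1*u2   (player 2 chooses u2)
    Each player minimises its own cost given the other's last move;
    for |c*d/4| < 1 the iteration contracts to the Nash equilibrium."""
    u1 = u2 = 0.0
    for _ in range(iters):
        u1 = a - c * u2 / 2.0   # argmin over u1 of J1, given u2
        u2 = b - d * u1 / 2.0   # argmin over u2 of J2, given u1
    return u1, u2
```

At the fixed point neither player can improve unilaterally, which is exactly the Nash property the abstract exploits for the two norm bounds.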

Relevance:

100.00%

Publisher:

Abstract:

The peer-to-peer (P2P) network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services but to provide them to other peers as well. While on one hand such a high level of decentralization may lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase one's own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given that P2P systems are based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties at system scale is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying game theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviours. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above.
Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both in its topology and in the nodes' strategies. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from its game theoretical inspiration (a tit-for-tat-like mechanism) to its swarm topology. We discovered fairness, meant as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and tackle free-riding and cheating nodes.
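A much-simplified sketch of the copy-the-better-peer update is given below; it ignores the link rewiring that SLAC/SLACER also perform, and the payoff matrix, population size and mutation rate are illustrative, not taken from the dissertation:

```python
import random

def slac_like(n=60, rounds=30, mutate=0.05, seed=1):
    """Highly simplified SLAC-style update: each node plays a one-shot
    Prisoner's Dilemma against a random other node, then compares its
    payoff with a random peer and, if the peer did better, copies its
    strategy (full SLAC also copies the peer's links)."""
    rng = random.Random(seed)
    # True = cooperate, False = defect; payoffs: T=5, R=3, P=1, S=0.
    strat = [rng.random() < 0.5 for _ in range(n)]
    payoff_table = {(True, True): 3, (True, False): 0,
                    (False, True): 5, (False, False): 1}
    for _ in range(rounds):
        payoff = [0.0] * n
        for i in range(n):
            j = rng.randrange(n)
            while j == i:
                j = rng.randrange(n)
            payoff[i] += payoff_table[(strat[i], strat[j])]
        new = strat[:]
        for i in range(n):
            j = rng.randrange(n)
            if payoff[j] > payoff[i]:
                new[i] = strat[j]             # copy the better performer
            if rng.random() < mutate:
                new[i] = rng.random() < 0.5   # occasional random reset
        strat = new
    return strat
```

Without the rewiring step this sketch does not reproduce SLAC's high-cooperation outcome; it only shows the local imitation-plus-mutation mechanics.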

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this research is to contribute to the literature on organizational demography and new product development by investigating how diverse individual career histories impact team performance. Moreover, we highlight the importance of also considering the institutional context and the specific labour market arrangements in which a team is embedded, in order to correctly interpret the effect of career-related diversity measures on performance. The empirical setting of the study is the videogame industry, and the teams in charge of developing new game titles. Videogame development teams are an ideal setting in which to investigate the influence of career histories on team performance, since the development of videogames is performed by multidisciplinary teams composed of specialists with a wide variety of technical and artistic backgrounds, whose work involves a significant amount of creative thinking. We investigate our research question both with quantitative methods and with a case study on the Japanese videogame industry, one of the most innovative in this sector. Our results show that career histories, in terms of occupational diversity, prior functional diversity and prior product diversity, usually have a positive influence on team performance. However, when the moderating effect of the institutional setting is taken into account, career diversity can have different or even opposite effects on team performance, according to the specific national context in which a team operates.
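The abstract does not say how its diversity measures are operationalised; a common choice in organizational demography for categorical attributes such as prior occupation is Blau's heterogeneity index, sketched here:

```python
from collections import Counter

def blau_index(categories):
    """Blau's heterogeneity index 1 - sum(p_i^2) over category
    shares p_i: 0 for a fully homogeneous team, approaching 1 as
    members spread evenly over many categories."""
    counts = Counter(categories)
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())
```

For example, a four-person team drawn from four distinct occupational backgrounds scores 0.75, while a team of four artists scores 0.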

Relevance:

100.00%

Publisher:

Abstract:

In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying in part graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
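To illustrate the spanning-tree characterisation of the first Symanzik polynomial, U(x) = Σ_T Π_{e∉T} x_e, here is a brute-force sketch suitable only for small graphs (the matrix-tree determinant route the dissertation discusses is the efficient one); monomials are returned as tuples of edge indices:

```python
from itertools import combinations

def spanning_trees(n_vertices, edges):
    """Enumerate spanning trees by brute force: a subset of n-1 edges
    is a spanning tree iff it never closes a cycle (union-find check)."""
    trees = []
    for subset in combinations(range(len(edges)), n_vertices - 1):
        parent = list(range(n_vertices))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path compression
                v = parent[v]
            return v
        ok = True
        for e in subset:
            u, v = edges[e]
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False        # adding this edge would close a cycle
                break
            parent[ru] = rv
        if ok:
            trees.append(set(subset))
    return trees

def first_symanzik(n_vertices, edges):
    """U = sum over spanning trees T of prod_{e not in T} x_e,
    represented as a sorted list of monomials (tuples of edge indices)."""
    all_edges = set(range(len(edges)))
    return sorted(tuple(sorted(all_edges - t))
                  for t in spanning_trees(n_vertices, edges))
```

For the one-loop bubble (two vertices joined by two edges) this gives U = x0 + x1, and for the one-loop triangle U = x0 + x1 + x2, matching the textbook results.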

Relevance:

100.00%

Publisher:

Abstract:

Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" problem is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correlation between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example, models of incentive compensation, the Shapiro-Stiglitz model, etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs.

He begins by pointing out the different examples of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in their incomes. A second source of randomness involves outside events beyond the employee's control that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes the opposite of those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning the outcome.

Mr. Pechersky schematises it thus. Compensation (the wage) is W and consists of a base amount plus a portion that varies with the outcome, x. So W = a + bx, where b measures the intensity of the incentives provided to the employee. This means that one contract is said to provide stronger incentives than another if it specifies a higher value for b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian example is that x is observed by the employer but not by the employee. So the employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee, the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the result obtained is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the expectations of the employee. This can lead, for example, to labour turnover, or to the expenses resulting from a bad reputation. In a dynamic framework the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his or her relationships with the employee.

If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty, the lower the discount factor), we can conclude that the answer to the question formulated above depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in providing the information needed to compute an efficient allocation of resources. At least equally important is the manner in which it accepts individually self-interested behaviour but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem, as Mr. Pechersky sees it, is that there is no reason to believe that the circumstances in Russia are right and that the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships. Such relationships can be considered satisfactory from a normative point of view only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This can be explained as a manifestation of economic agents' aversion to risk. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and agents' behaviour will be explicable in terms of more traditional models.
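The dynamic conclusion, that a higher discount factor strengthens the employer's incentive to be honest, can be illustrated with a generic grim-trigger calculation (a standard repeated-game sketch, not Mr. Pechersky's model): honesty is sustainable exactly when the discount factor exceeds a threshold.

```python
def critical_discount(one_shot_gain, per_period_loss):
    """Grim-trigger sketch: reneging on the promised wage yields a
    one-shot gain g but a per-period reputation loss l in every later
    period; honesty is sustainable when g <= (delta/(1-delta)) * l,
    i.e. when delta >= g / (g + l)."""
    g, l = one_shot_gain, per_period_loss
    return g / (g + l)

def employer_is_honest(delta, one_shot_gain, per_period_loss):
    """True when the discount factor supports honest behaviour."""
    return delta >= critical_discount(one_shot_gain, per_period_loss)
```

With gain and loss of equal size the threshold is 0.5: a patient employer (delta = 0.9) keeps the promise, an impatient one (delta = 0.3) reneges, mirroring the stability argument above.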

Relevance:

100.00%

Publisher:

Abstract:

What motivates students to perform and pursue engineering design tasks? This study examines this question by way of three Learning Through Service (LTS) programs: 1) an on-going longitudinal study examining the impacts of service on engineering students, 2) an on-going analysis of an international senior design capstone program, and 3) an on-going evaluation of an international graduate-level research program. The evaluation of these programs incorporates both qualitative and quantitative methods, utilizing surveys, questionnaires, and interviews, which help to provide insight into what motivates students to do engineering design work. The quantitative methods were utilized in analyzing various instruments, including a readiness assessment inventory, the Intercultural Development Inventory, the Sustainable Engineering through Service Learning survey, the Impacts of Service on Engineering Students survey, and motivational narratives, as well as some analysis of interview text. The results of these instruments help to provide much-needed insight into how prepared students are to participate in engineering programs. Additional qualitative methods include word clouds, motivational narratives, and interview analysis; these aim to collect more in-depth information than the quantitative instruments allow. This thesis focused on how these instruments help to determine what motivates engineering students to pursue engineering design tasks. Preliminary results suggest that, across the 120 interviews analyzed, interest/enjoyment, application of knowledge and skills, and gaining knowledge are key motivating factors regardless of gender or academic level. Together these findings begin to shed light on what motivates students to perform engineering design tasks, which can be applied to better recruitment and retention in university programs.

Relevance:

100.00%

Publisher:

Abstract:

This dissertation presents competitive control methodologies for small-scale power systems (SSPS). A SSPS is a collection of sources and loads sharing a common network which can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems and telecommunication power systems are typical examples of SSPS. A SSPS lacks a defined slack bus, which complicates the analysis and development of its control systems. In addition, a change of a load or source will influence the real-time parameters of the system. Therefore, the control system should provide the required flexibility to ensure operation as a single aggregated system. In most SSPS the sources and loads must be equipped with power electronic interfaces, which can be modeled as dynamic controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control and the fundamental theory of electrical power systems. The micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis was done of optimal solutions with regard to startup transient modeling, bus selection modeling and the level of communication within the micro-grids. In each approach a detailed mathematical model is formed to observe the system response. The differential game theoretic approach was also used for modeling and optimization of startup transients. The startup transient controller was implemented with open-loop, PI and feedback control methodologies. Hardware implementation was then carried out to validate the theoretical results. The proposed game theoretic controller shows higher performance than the traditional PI controller during startup. In addition, the optimal transient surface is necessary when implementing the feedback controller for the startup transient.
Further, the experimental results are in agreement with the theoretical simulation. The bus selection and team communication were modeled with discrete and continuous game theory models. Although players have multiple choices, this controller is capable of choosing the optimum bus. The team communication structures are then able to optimize the players' Nash equilibrium point. All mathematical models are based on the local information of the load or source. As a result, these models are the keys to developing accurate distributed controllers.
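The bus selection step can be illustrated as a discrete congestion game solved by iterated best response (such games always admit a pure Nash equilibrium); the base costs and congestion penalty below are hypothetical, not taken from the dissertation:

```python
def bus_selection_nash(base_cost, n_players, congestion=1.0, iters=50):
    """Sketch of discrete bus selection as a congestion game: each
    player picks the bus minimising base_cost[b] plus a congestion
    penalty proportional to how many other players already sit on b.
    Best responses are iterated until no player wants to switch."""
    choice = [0] * n_players
    for _ in range(iters):
        changed = False
        for p in range(n_players):
            loads = [0] * len(base_cost)
            for q, b in enumerate(choice):
                if q != p:
                    loads[b] += 1          # other players on each bus
            best = min(range(len(base_cost)),
                       key=lambda b: base_cost[b] + congestion * loads[b])
            if best != choice[p]:
                choice[p] = best
                changed = True
        if not changed:                    # fixed point = Nash equilibrium
            break
    return choice
```

With two identical buses and two players, the equilibrium spreads the players across both buses, since crowding onto one bus is never a best response.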

Relevance:

100.00%

Publisher:

Abstract:

Erosion of dentine causes mineral dissolution, while the organic compounds remain at the surface. Therefore, determination of tissue loss is complicated. Established quantitative methods for the evaluation of enamel have also been used for dentine, but the suitability of these techniques in this field has not been systematically determined. Therefore, this study aimed to compare longitudinal microradiography (LMR), contacting (cPM) and non-contacting profilometry (ncPM), and analysis of dissolved calcium (Ca analysis) in the erosion solution. Results are discussed in the light of the histology of dentine erosion. Erosion was performed with 0.05 M citric acid (pH 2.5) for 30, 60, 90 or 120 min, and erosive loss was determined by each method. LMR, cPM and ncPM were performed before and after collagenase digestion of the demineralised organic surface layer, with an emphasis on moisture control. Scanning electron microscopy was performed on randomly selected specimens. All measurements were converted into micrometres. Profilometry was not suitable to adequately quantify mineral loss prior to collagenase digestion. After 120 min of erosion, values of 5.4 ± 1.9 µm (ncPM) and 27.8 ± 4.6 µm (cPM) were determined. Ca analysis revealed a mineral loss of 55.4 ± 11.5 µm. The values for profilometry after matrix digestion were 43.0 ± 5.5 µm (ncPM) and 46.9 ± 6.2 µm (cPM). Relative and proportional biases were detected for all method comparisons. The mineral loss values were below the detection limit for LMR. The study revealed gross differences between the methods, particularly when demineralised organic surface tissue was present. These results indicate that the choice of method is critical and depends on the parameter under study.
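The "relative and proportional biases" mentioned are typically assessed with Bland-Altman-style statistics on paired measurements; a sketch with obviously synthetic data (not the study's measurements):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Agreement between two measurement methods on the same specimens:
    mean bias of the paired differences and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread
```

A nonzero mean bias indicates a systematic (relative) offset between methods; a bias that grows with the magnitude of the measurement indicates a proportional bias, which is checked by regressing the differences on the pairwise means.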

Relevance:

100.00%

Publisher:

Abstract:

Anyone surveying the humanities- and social-science-based literature in the canon of gender theory gets the impression that psychology plays no major role within this field of research. One possible reason for the missing integration of psychological research seems to be its reliance on quantitative empirical methods, an approach that is central to scientifically oriented psychological research. In this article we make the case for a gender-theoretically informed quantitative experimental psychology. Using our own research field, the psychology of language, we illustrate where newer behavioural and neuroscientific methods can contribute and how they complement findings from qualitative gender research. The first part deals with current studies which show, using reaction-time measurements and evoked potentials among other methods, how strongly gender stereotypes are anchored in semantics. The second part addresses recent neuroimaging findings that call into question sex differences in the lateralization of language processing. Finally, we sketch newer research approaches and argue for a transdisciplinary combination of qualitative and quantitative methods.

Relevance:

100.00%

Publisher:

Abstract:

This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the social loss function cannot serve as a direct loss function for the central bank. Accordingly, we employ implementation theory to design a central bank loss function (mechanism design) with consistent targets, while the social loss function serves as a social welfare criterion. That is, with the correct mechanism design for the central bank loss function, optimal policy and consistent policy become identical. In other words, optimal policy proves implementable (consistent).
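The paper's own model is not reproduced here, but the classic Barro-Gordon example illustrates the mechanism the abstract describes: an inconsistent target in the social loss function creates the gap between optimal and consistent policy, and redesigning the central bank's loss function closes it.

```latex
% Social loss with an over-ambitious output target y^* > \bar{y}:
L^{soc} = (\pi - \pi^*)^2 + \lambda\,(y - y^*)^2 ,
\qquad y = \bar{y} + (\pi - \pi^e) .
% Discretionary (consistent) policy under rational expectations
% (\pi^e = \pi) carries an inflation bias:
\pi = \pi^* + \lambda\,(y^* - \bar{y}) \;>\; \pi^* .
% Mechanism design: give the central bank a loss function with the
% consistent target \bar{y} in place of y^*:
L^{cb} = (\pi - \pi^*)^2 + \lambda\,(y - \bar{y})^2
\;\Longrightarrow\; \pi = \pi^* ,
% so consistent policy under L^{cb} coincides with optimal policy
% as evaluated by the social welfare criterion L^{soc}.
```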