887 results for global convergence to nash equilibria
Abstract:
In this work we give sufficient conditions on the k-th approximations of the polynomial roots of f(x) under which the Maehly-Aberth-Ehrlich, Werner-Börsch-Supan, Tanabe, and Improved Börsch-Supan iteration methods fail on the next step. For these methods all non-attractive sets are found. This is a subsequent improvement of previously developed techniques and known facts. Users of these methods can apply the results presented here for software implementation in Distributed Applications and Simulation Environments. Numerical examples with graphics are shown.
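For context, here is a minimal sketch of the classical Aberth-Ehrlich simultaneous iteration that this family of methods builds on; the function name, tolerance, and starting points are illustrative assumptions, not details from the paper.

```python
import numpy as np

def aberth_ehrlich(coeffs, z0, tol=1e-12, max_iter=100):
    """Plain Aberth-Ehrlich iteration: refine all root approximations of
    the polynomial given by `coeffs` (highest degree first) simultaneously."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    z = np.array(z0, dtype=complex)
    for _ in range(max_iter):
        w = p(z) / dp(z)                      # Newton corrections
        # Pairwise repulsion term: sum over j != k of 1/(z_k - z_j)
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, np.inf)        # exclude j == k (1/inf = 0)
        s = (1.0 / diff).sum(axis=1)
        z_new = z - w / (1.0 - w * s)         # Aberth-Ehrlich step
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z

# Example: roots of x^3 - 1 from rough starting points on a circle
roots = aberth_ehrlich([1, 0, 0, -1], 0.8 * np.exp(2j * np.pi * np.arange(3) / 3))
print(np.sort_complex(roots))
```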
Abstract:
2000 Mathematics Subject Classification: 60G18, 60E07
Abstract:
This article examines the transformation in the narratives of the international governance of security over the last two decades. It suggests that there has been a major shift from governing interventions designed to address the causes of security problems to the regulation of the effects of these problems. In rearticulating the goals of international actors, the means and mechanisms of security governance have also changed, no longer focused on the universal application of Western knowledge and resources but rather on the unique local and organic processes at work in societies that bear the brunt of these problems. This transformation takes the conceptualisation of security governance out of the traditional lexicon of security expertise and universal solutions and instead articulates the problematic of security and the policing of global risks in terms of local management processes, suggesting that decentralised coping strategies and self-policing are more effective and sustainable solutions.
Abstract:
As part of the Prato Collaborative, I am undertaking a Delphi study to explore the developmental journeys that nine different countries (including Northern Ireland and Ireland) have undertaken in adult mental health and children's services to better meet the needs of families where a parent has a mental illness. This research has the potential to impact FFP in adult mental health and children's services.
Abstract:
In the vein of the "Education for All" campaign to promote access to education, a wave of curriculum revision along the competency-based approach has swept francophone countries in sub-Saharan Africa, including Benin. The current study documents local actors' various interactions with the curricular reform in the course of its implementation. Secondary data, supplemented with qualitative research techniques such as semi-structured interviews with teachers and focus group discussions with parents, make it possible to trace the patterns of change, the challenges, and the resistance to change. The spectrum of actors generated illustrates advocacy on the one hand and resistance on the other. Local actors' advocacy reflects the optimistic global discourse on education, while resistance is fostered by disappointing policy outcomes as well as contextual constraints. (DIPF/Orig.)
Abstract:
The article offers a systematic analysis of the comparative trajectory of international democratic change. In particular, it focuses on the resulting convergence or divergence of political systems, borrowing from the literatures on institutional change and policy convergence. To this end, political-institutional data in line with Arend Lijphart’s (1999, 2012) empirical theory of democracy for 24 developed democracies between 1945 and 2010 are analyzed. Heteroscedastic multilevel models allow for directly modeling the development of the variance of types of democracy over time, revealing information about convergence and adding substantive explanations. The findings indicate that there has been a trend away from extreme types of democracy in individual cases, but no unconditional trend of convergence can be observed. However, there are conditional processes of convergence. In particular, economic globalization and the domestic veto structure interactively influence democratic convergence.
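To make the variance-modeling idea concrete, the following is a minimal sketch, not the authors' specification, of fitting a normal model whose variance changes log-linearly over time by maximum likelihood; a negative trend coefficient would indicate shrinking cross-country variance, i.e. convergence. The function name and the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_variance_trend(y, t):
    """Fit y_i ~ N(mu, sigma_i^2) with log(sigma_i) = a + b * t_i by
    maximum likelihood; b < 0 indicates variance shrinking over time."""
    def nll(params):
        mu, a, b = params
        sigma = np.exp(a + b * t)
        return np.sum(np.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2)
    res = minimize(nll, x0=[y.mean(), np.log(y.std()), 0.0],
                   method="Nelder-Mead")
    return res.x  # (mu, a, b)

# Toy data: dispersion of a democracy-type score declining over 66 years
# for 24 countries; the true trend coefficient is b = -1.
rng = np.random.default_rng(0)
t = np.repeat(np.arange(66), 24) / 65.0
y = rng.normal(0.0, np.exp(0.5 - 1.0 * t))
mu, a, b = fit_variance_trend(y, t)
print(f"mu={mu:.3f}, a={a:.3f}, b={b:.3f}")  # expect b near -1
```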
Abstract:
The image reconstruction using the EIT (Electrical Impedance Tomography) technique is a nonlinear and ill-posed inverse problem which demands a powerful direct or iterative method. A typical approach for solving the problem is to minimize an error functional using an iterative method. In this case, an initial solution close enough to the global minimum is mandatory to ensure convergence to the correct minimum in an appropriate time interval. The aim of this paper is to present a new, simple and low-cost technique (quadrant-searching) to reduce the search space and consequently to obtain an initial solution of the inverse problem of EIT. This technique calculates the error functional for four different contrast distributions, placing a large prospective inclusion in each of the four quadrants of the domain. Comparing the four values of the error functional, it is possible to draw conclusions about the internal electric contrast. For this purpose, we initially performed tests to assess the accuracy of the BEM (Boundary Element Method) when applied to the direct problem of EIT and to verify the behavior of the error functional surface in the search space. Finally, numerical tests were performed to verify the new technique.
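A schematic rendering of the quadrant-searching idea, under the assumption of a generic forward solver standing in for the paper's BEM solver: evaluate the error functional with a large trial inclusion in each quadrant and keep the best-fitting quadrant as the initial guess. `toy_forward_solve` and all parameters are placeholders.

```python
import numpy as np

def toy_forward_solve(center, radius, sigma_in, sigma_out, n_electrodes=16):
    """Toy stand-in for the BEM forward solver: fake 'boundary voltages'
    that depend smoothly on the inclusion position. Purely illustrative."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_electrodes, endpoint=False)
    ex, ey = np.cos(theta), np.sin(theta)   # electrode positions on unit circle
    dist = np.hypot(ex - center[0], ey - center[1])
    return sigma_out + (sigma_in - sigma_out) * np.exp(-dist / radius)

def quadrant_search(measured, forward_solve, contrast=2.0, background=1.0):
    """Evaluate the least-squares error functional for a large trial inclusion
    in each quadrant; return the best quadrant index and all four values."""
    centers = [(0.5, 0.5), (-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5)]
    errors = [np.sum((forward_solve((cx, cy), 0.4, contrast, background)
                      - measured) ** 2) for cx, cy in centers]
    return int(np.argmin(errors)), errors

# 'Measured' data generated with a true inclusion in quadrant I
measured = toy_forward_solve((0.4, 0.6), 0.4, 2.0, 1.0)
best, errs = quadrant_search(measured, toy_forward_solve)
print("best quadrant:", best, "errors:", np.round(errs, 4))
```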
Abstract:
The left ventricular response to dobutamine may be quantified using tissue Doppler measurement of myocardial velocity or displacement, or 3-dimensional echocardiography to measure ventricular volume and ejection fraction. This study sought to explore the accuracy of these methods for predicting segmental and global responses to therapy. Standard dobutamine and 3-dimensional echocardiography were performed in 92 consecutive patients with abnormal left ventricular function at rest. Recovery of function was defined by comparison with follow-up echocardiography at rest 5 months later. Segments that showed improved regional function at follow-up showed a higher increment in peak tissue Doppler velocity with dobutamine than nonviable segments (1.2 +/- 0.4 vs 0.3 +/- 0.2 cm/s, p = 0.001). Similarly, patients who showed a >5% improvement of ejection fraction at follow-up showed a greater displacement response to dobutamine (6.9 +/- 3.2 vs 2.1 +/- 2.3 mm, p = 0.001), as well as a greater ejection fraction response to dobutamine (9 +/- 3% vs 2 +/- 2%, p = 0.001). The optimal cutoff values for predicting subsequent recovery of function at rest were an increment of peak velocity >1 cm/s, >5 mm of displacement, and a >5% improvement of ejection fraction with low-dose dobutamine. (C) 2003 by Excerpta Medica, Inc.
Abstract:
We present deterministic dynamics on the production costs of Cournot competitions, based on perfect Nash equilibria of nonlinear R&D investment strategies that reduce the production costs of the firms at every period of the game. We analyse the effects that the R&D investment strategies can have on the profits of the firms over time. We show that small changes in the initial production costs, or small changes in the parameters that determine the efficiency of the R&D programs or of the firms, can produce strong economic effects on the long-run profits of the firms.
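For orientation, a minimal sketch of the one-period building block underlying such models (a standard linear Cournot duopoly, not the paper's full R&D dynamics): Nash equilibrium quantities and profits as a function of the two firms' production costs.

```python
def cournot_nash(alpha, beta, c1, c2):
    """Nash equilibrium of a linear Cournot duopoly with inverse demand
    p = alpha - beta*(q1 + q2) and constant marginal costs c1, c2.
    Returns equilibrium quantities and profits for both firms."""
    q1 = max((alpha - 2.0 * c1 + c2) / (3.0 * beta), 0.0)
    q2 = max((alpha - 2.0 * c2 + c1) / (3.0 * beta), 0.0)
    p = alpha - beta * (q1 + q2)
    return (q1, q2), (q1 * (p - c1), q2 * (p - c2))

# A small cost advantage for firm 1 yields higher output and profit:
# here q1=3.0, q2=2.0, pi1=9.0, pi2=4.0
(q1, q2), (pi1, pi2) = cournot_nash(alpha=10.0, beta=1.0, c1=2.0, c2=3.0)
print(q1, q2, pi1, pi2)
```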
Abstract:
We present a new deterministic dynamical model on the market size of Cournot competitions, based on Nash equilibria of R&D investment strategies to increase the market size of the firms at every period of the game. We compute the unique Nash equilibrium for the second subgame and the profit functions for both firms. Adding uncertainty to the R&D investment strategies, we obtain a new stochastic dynamical model and analyse how uncertainty can reverse the initial advantage of one firm over the other.
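A toy illustration of that stochastic ingredient, not the paper's equilibrium dynamics: perturb each firm's market-size growth with noise every period and record when, if ever, the initially smaller firm overtakes the leader. All names and parameters are illustrative.

```python
import numpy as np

def first_reversal(periods=500, seed=7):
    """Toy stochastic dynamic: both firms' market sizes grow by a noisy
    R&D effect each period; return the first period, if any, at which the
    initially smaller firm 2 overtakes firm 1. Illustrative only."""
    rng = np.random.default_rng(seed)
    s1, s2 = 10.0, 8.0                   # firm 1 starts with the advantage
    for t in range(periods):
        s1 += rng.normal(0.05, 0.3)      # noisy R&D outcome for firm 1
        s2 += rng.normal(0.05, 0.3)      # noisy R&D outcome for firm 2
        if s2 > s1:
            return t                     # period at which the advantage flips
    return None

print(first_reversal())
```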
Abstract:
We report experiments designed to test between Nash equilibria that are stable and unstable under learning. The “TASP” (Time Average of the Shapley Polygon) gives a precise prediction about what happens when there is divergence from equilibrium under fictitious-play-like learning processes. We use two 4 x 4 games, each with a unique mixed Nash equilibrium; one is stable and one is unstable under learning. Both games are versions of Rock-Paper-Scissors with the addition of a fourth strategy, Dumb. Nash equilibrium places a weight of 1/2 on Dumb in both games, but the TASP places no weight on Dumb when the equilibrium is unstable. We also vary the level of monetary payoffs, with higher payoffs predicted to increase instability. We find that the high-payoff unstable treatment differs from the others: the frequency of Dumb is lower and play is further from Nash than in the other treatments. That is, we find support for the comparative statics prediction of learning theory, although the frequency of Dumb is substantially greater than zero in the unstable treatments.
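A minimal fictitious-play sketch with an illustrative payoff matrix, not the experimental 4 x 4 games: each player best-responds to the opponent's empirical action frequencies. For zero-sum Rock-Paper-Scissors the empirical mixtures approach the mixed equilibrium, while in non-zero-sum variants such as Shapley's game play cycles and the time average converges to the TASP instead.

```python
import numpy as np

def fictitious_play(A, steps=20000):
    """Fictitious play in a symmetric 2-player game. A[i, j] is the payoff
    of playing i against j. Each player best-responds to the opponent's
    empirical action frequencies; returns the two empirical mixtures."""
    n = A.shape[0]
    counts = np.ones((2, n))                   # smoothed action counts
    for _ in range(steps):
        f0 = counts[0] / counts[0].sum()       # player 0's empirical mixture
        f1 = counts[1] / counts[1].sum()       # player 1's empirical mixture
        counts[0, np.argmax(A @ f1)] += 1      # player 0 best-responds
        counts[1, np.argmax(A @ f0)] += 1      # player 1 best-responds
    return counts / counts.sum(axis=1, keepdims=True)

# Zero-sum Rock-Paper-Scissors: empirical play approaches (1/3, 1/3, 1/3)
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
print(fictitious_play(rps))
```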
Abstract:
We report results from an experiment that explores the empirical validity of correlated equilibrium, an important generalization of the Nash equilibrium concept. Specifically, we seek to understand the conditions under which subjects playing the game of Chicken will condition their behavior on private, third-party recommendations drawn from known distributions. In a “good-recommendations” treatment, the distribution we use is a correlated equilibrium with payoffs better than any symmetric payoff in the convex hull of Nash equilibrium payoff vectors. In a “bad-recommendations” treatment, the distribution is a correlated equilibrium with payoffs worse than any Nash equilibrium payoff vector. In a “Nash-recommendations” treatment, the distribution is a convex combination of Nash equilibrium outcomes (which is also a correlated equilibrium), and in a fourth “very-good-recommendations” treatment, the distribution yields high payoffs but is not a correlated equilibrium. We compare behavior in all of these treatments to the case where subjects do not receive recommendations. We find that when recommendations are not given to subjects, behavior is very close to mixed-strategy Nash equilibrium play. When recommendations are given, behavior does differ from mixed-strategy Nash equilibrium, with the nature of the differences varying according to the treatment. Our main finding is that subjects will follow third-party recommendations only if those recommendations derive from a correlated equilibrium, and further, if that correlated equilibrium is payoff-enhancing relative to the available Nash equilibria.
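To illustrate the equilibrium concept, here is a check of the correlated-equilibrium obedience constraints for a standard Chicken payoff matrix; the payoffs are illustrative, not those used in the experiment.

```python
import numpy as np

# Chicken with illustrative payoffs: action 0 = Dare, action 1 = Chicken.
# U1[i, j], U2[i, j]: payoffs when row plays i and column plays j.
U1 = np.array([[0.0, 7.0], [2.0, 6.0]])
U2 = U1.T  # symmetric game

def is_correlated_eq(mu, U1, U2, tol=1e-9):
    """Check the obedience constraints: conditional on being told action i,
    no player gains by deviating to any other action k."""
    for i in range(2):                   # recommended action for player 1
        row = mu[i, :]                   # joint probabilities given i
        if row.sum() > tol:
            for k in range(2):
                if row @ U1[k, :] > row @ U1[i, :] + tol:
                    return False
    for j in range(2):                   # recommended action for player 2
        col = mu[:, j]
        if col.sum() > tol:
            for k in range(2):
                if col @ U2[:, k] > col @ U2[:, j] + tol:
                    return False
    return True

# Classic distribution: 1/3 each on (D,C), (C,D), (C,C) -- a correlated
# equilibrium whose payoff (5, 5) beats the symmetric mixed Nash payoff.
mu = np.array([[0.0, 1/3], [1/3, 1/3]])
print(is_correlated_eq(mu, U1, U2))      # True
```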
Abstract:
What is the role of unilateral measures in global climate change mitigation in a post-Durban, post-2012 global policy regime? We argue that under conditions of preference heterogeneity, unilateral emissions mitigation at a subnational level may exist even when a nation is unwilling to commit to emission cuts. As the fraction of individuals unilaterally cutting emissions in a global, strongly connected network of countries evolves over time, learning the costs of cutting emissions can result in the adoption of such activities globally, and we establish that this will indeed happen under certain assumptions. We analyze the features of a policy proposal that could accelerate convergence to a low-carbon world in the presence of global learning.
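A toy sketch of the diffusion mechanism, not the paper's formal model: on a strongly connected network of countries, each country's perceived mitigation cost falls as it observes adopting neighbours, and it adopts once the cost drops below the benefit. All parameters are invented for illustration.

```python
import numpy as np

def diffusion(adj, initial, cost, benefit=1.0, learn_rate=0.1, periods=50):
    """Toy learning dynamic on a strongly connected country network:
    a country's perceived cost falls in proportion to the share of its
    neighbours that have adopted; it adopts once cost < benefit."""
    adopted = np.zeros(len(adj), dtype=bool)
    adopted[initial] = True
    cost = np.array(cost, dtype=float)
    for _ in range(periods):
        if adopted.all():
            break
        neighbour_share = (adj @ adopted) / adj.sum(axis=1)
        cost *= 1.0 - learn_rate * neighbour_share  # learning spillover
        adopted |= cost < benefit
    return adopted

# Ring of 6 countries; one early adopter; learning spreads adoption globally
ring = np.eye(6, k=1) + np.eye(6, k=-1)
ring[0, -1] = ring[-1, 0] = 1.0
print(diffusion(ring, initial=0, cost=[0.5, 1.5, 1.8, 2.0, 1.8, 1.5]))
```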
Abstract:
In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems. In this setting, recent works have shown how to obtain a statistics of extremes in agreement with classical Extreme Value Theory. We pursue these investigations by giving analytical expressions for the Extreme Value distribution parameters for maps that have an absolutely continuous invariant measure. We compare these analytical results with numerical experiments in which we study the convergence to limiting distributions using the so-called block-maxima approach, pointing out in which cases we obtain robust estimates of the parameters. In regular maps, for which mixing properties do not hold, we show that the fitting procedure to the classical Extreme Value Distribution fails, as expected. However, we obtain an empirical distribution that can be explained starting from a different observable function, for which Nicolis et al. (Phys. Rev. Lett. 97(21): 210602, 2006) have found analytical results.
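A minimal block-maxima sketch on an illustrative chaotic map, not the paper's experiments: iterate the logistic map, evaluate an observable peaking at a reference point, slice the series into blocks, and fit a GEV distribution to the block maxima with scipy.

```python
import numpy as np
from scipy.stats import genextreme

# Orbit of the fully chaotic logistic map x -> 4x(1 - x), which has an
# absolutely continuous invariant measure on [0, 1].
x = 0.3141
orbit = np.empty(200_000)
for i in range(orbit.size):
    orbit[i] = x
    x = 4.0 * x * (1.0 - x)

# Observable peaking at a reference point x* = 0.5 (illustrative choice)
obs = -np.log(np.abs(orbit - 0.5) + 1e-15)

# Block-maxima approach: maxima over blocks of length 1000, then GEV fit
block = 1000
maxima = obs.reshape(-1, block).max(axis=1)
shape, loc, scale = genextreme.fit(maxima)
print(f"GEV shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```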
Abstract:
The ability to run General Circulation Models (GCMs) at ever-higher horizontal resolutions has meant that tropical cyclone simulations are increasingly credible. A hierarchy of atmosphere-only GCMs, based on the Hadley Centre Global Environmental Model (HadGEM1), with horizontal resolution increasing from approximately 270km to 60km (at 50°N), is used to systematically investigate the impact of spatial resolution on the simulation of global tropical cyclone activity, independent of model formulation. Tropical cyclones are extracted from ensemble simulations and reanalyses of comparable resolutions using a feature-tracking algorithm. Resolution is critical for simulating storm intensity, and convergence to observed storm intensities is not achieved with the model hierarchy. Resolution is less critical for simulating the annual number of tropical cyclones and their geographical distribution, which are well captured at resolutions of 135km or higher, particularly for Northern Hemisphere basins. Simulating the interannual variability of storm occurrence requires resolutions of 100km or higher; however, the level of skill is basin dependent. Higher resolution GCMs are increasingly able to capture the interannual variability of the large-scale environmental conditions that contribute to tropical cyclogenesis. Different environmental factors contribute to the interannual variability of tropical cyclones in the different basins: in the North Atlantic basin the vertical wind shear, potential intensity and low-level absolute vorticity are dominant, while in the North Pacific basins mid-level relative humidity and low-level absolute vorticity are dominant. Model resolution is crucial for a realistic simulation of tropical cyclone behaviour, and high-resolution GCMs are found to be valuable tools for investigating the global location and frequency of tropical cyclones.