999 results for concurrent game structures
Abstract:
Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the great deal of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and their manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures that are accessed and modified by an unbounded number of threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work, we present a formal framework based on deductive methods that is capable of verifying safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes proof rules and techniques specially adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly separates the analysis of the program control flow from the analysis of the data being manipulated. The program control flow is analyzed using deductive proof rules and verification techniques specifically designed to cope with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The verification conditions correspond to the data manipulation. We study the design of specialized decision procedures that deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and its practical implementation on top of existing SMT solvers. These decision procedures are ultimately used to automatically verify the verification conditions generated by our parametrized verification techniques. Finally, we show how our framework makes it possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and of programs that manipulate concurrent data structures.
The approach we present in this work is very general, and can be applied to verify a wide range of similar concurrent data types.
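The abstract describes generating verification conditions and discharging them with SMT-based decision procedures. As a rough, hypothetical illustration of that workflow (not the thesis's actual procedures), the following sketch uses the Z3 SMT solver's Python API to check a toy invariant-preservation condition; the invariant, the transition relation and all variable names are invented for the example.

```python
# Minimal sketch: checking a verification condition (VC) with an SMT solver.
# The invariant, transition and variable names are illustrative only.
from z3 import Ints, And, Implies, Not, Solver, unsat

x, x_new = Ints("x x_new")
inv = x >= 0                                # candidate invariant over shared state
trans = x_new == x + 1                      # one transition of the (toy) program
vc = Implies(And(inv, trans), x_new >= 0)   # VC: the transition preserves the invariant

s = Solver()
s.add(Not(vc))                              # the VC is valid iff its negation is unsat
print("VC valid" if s.check() == unsat else "VC refuted")
```

A real parametrized proof would generate finitely many such conditions, one per transition and property, and discharge each in a richer theory of heaps, lists or skiplists.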
Abstract:
Human operators are unique in their decision-making capability, judgment and nondeterminism. Their sense of judgment, unpredictable decision procedures, and susceptibility to environmental elements can cause them to erroneously execute a given task description when operating a computer system. Usually, a computer system is protected against some erroneous human behaviors by having the necessary safeguard mechanisms in place, but some erroneous human operator behaviors can lead to severe or even fatal consequences, especially in safety-critical systems. A generalized methodology for modeling and analyzing the interactions between computer systems and human operators, in which the operators are allowed to deviate from their prescribed behaviors, provides a formal understanding of the robustness of a computer system against possible aberrant behaviors by its human operators. We provide several methodologies for assisting in modeling and analyzing human behaviors exhibited while operating computer systems. Every human operator is usually given a specific recommended set of guidelines for operating a system. We first present a process-algebraic methodology for modeling and verifying recommended human task execution behavior. We then show how one can perform runtime monitoring of a computer system operated by a human operator to check for violations of temporal safety properties. We consider the concept of a protection envelope: a wider class of behaviors than those strictly prescribed by a human task that can still be tolerated by a system. We then provide a framework for determining whether a computer system can maintain its guarantees if the human operators operate within their protection envelopes. This framework also helps to determine the robustness of the computer system under weakening of the protection envelopes. In this regard, we present a tool called Tutela that assists in implementing the framework. We then examine the ability of a system to remain safe under broad classes of variations of the prescribed human task. We develop a framework addressing two issues. The first issue is: given a human task specification and a protection envelope, will the protection envelope properties still hold under standard erroneous executions of that task by the human operators? In other words, how robust is the protection envelope? The second issue is: in the absence of a protection envelope, can we approximate a protection envelope encompassing those standard erroneous human behaviors that can be safely endured by the system? We present an extension of Tutela that implements this framework. The two frameworks mentioned above use Concurrent Game Structures (CGS) as models for both computer systems and their human operators. However, this formalism has some shortcomings for our uses. We add incomplete-information concepts to CGSs to achieve better modularity for the players, and we introduce nondeterminism both in the transition system and in the players' strategies when modeling human operators and computer systems. The resulting incomplete-information Nondeterministic CGS (iNCGS), with nondeterministic action strategies for players, is a more precise formalism for modeling human behaviors exhibited while operating a computer system. We show how we can reason about a human behavior satisfying a guarantee by providing a semantics of Alternating-Time Temporal Logic based on iNCGS player strategies.
In a nutshell, this dissertation provides formal methodologies for modeling and analyzing system robustness against both expected and erroneous human operator behaviors.
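As a concrete point of reference, here is a minimal, hypothetical encoding of a plain Concurrent Game Structure in Python; the iNCGS extension described in the abstract would relax `delta` to return a set of successor states. All identifiers are illustrative, not taken from the dissertation.

```python
# Minimal sketch of a Concurrent Game Structure (CGS): all players pick moves
# simultaneously and the joint move determines the successor state.
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

State = str
Move = str
JointMove = Tuple[Move, ...]

@dataclass(frozen=True)
class CGS:
    players: Tuple[str, ...]
    states: FrozenSet[State]
    moves: Callable[[State, int], FrozenSet[Move]]  # moves open to player i at state s
    delta: Callable[[State, JointMove], State]      # deterministic transition function

# Toy instance: an operator and a system jointly control a two-state machine.
cgs = CGS(
    players=("operator", "system"),
    states=frozenset({"safe", "error"}),
    moves=lambda s, i: frozenset({"ok", "err"}) if i == 0 else frozenset({"accept"}),
    delta=lambda s, jm: "error" if jm[0] == "err" else "safe",
)
print(cgs.delta("safe", ("err", "accept")))  # -> error
```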
Abstract:
Many species are able to learn to associate behaviours with rewards, as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals that use information about the payoffs associated with non-tried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoffs. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask which learning rule is evolutionarily stable under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. Using stochastic approximation theory and simulations, we analyse the learning dynamics on the behavioural timescale, and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also obtain in which trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
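For intuition, the trial-and-error (realized-payoff) learner that the abstract contrasts with hypothetical reinforcement can be sketched as a standard propensity-reinforcement rule; the game, parameters and update rule below are illustrative, not the paper's exact model.

```python
# Minimal sketch of trial-and-error learning in a repeated two-action game:
# only the action actually played is reinforced, by the payoff actually received.
import random

payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}  # toy game

def choose(prop):
    """Pick an action with probability proportional to its propensity."""
    total = prop["C"] + prop["D"]
    return "C" if random.random() < prop["C"] / total else "D"

p1 = {"C": 1.0, "D": 1.0}   # initial propensities
p2 = {"C": 1.0, "D": 1.0}
for _ in range(10_000):
    a1, a2 = choose(p1), choose(p2)
    p1[a1] += payoff[(a1, a2)]  # reinforce only the realized action
    p2[a2] += payoff[(a2, a1)]
print(p1, p2)
```

A hypothetical-reinforcement learner would instead update the propensities of all actions, using the payoffs each action would have earned against the partner's realized move.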
Abstract:
To comply with copyright, the electronic version of this thesis has been stripped of its visual and audiovisual documents. The full version of the thesis was deposited with the Service de la gestion des documents et des archives of the Université de Montréal.
Abstract:
Graduate Program in Veterinary Medicine - FMVZ
Abstract:
Session 7: Playing with Roles, images and improvising New States of Awareness, 3rd Global Conference, 1st November – 3rd November, 2014, Prague, Czech Republic.
Abstract:
This paper presents a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP. Object-Z provides a convenient way of modelling the complex data structures needed to define the component processes of such systems, and CSP enables the concise specification of process interactions. The basis of the integration is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be used directly within the CSP part of the specification. In addition to specification, we also discuss refinement and verification in this model. The common semantic basis enables a unified method of refinement to be used, based upon CSP refinement. To enable state-based techniques to be used for the Object-Z components of a specification, we develop state-based refinement relations which are sound and complete with respect to CSP refinement. In addition, a verification method for static and dynamic properties is presented. The method allows us to verify properties of the CSP system specification in terms of its component Object-Z classes by using the laws of the CSP operators together with the logic for Object-Z.
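For reference, CSP refinement in its simplest (traces) model is the standard relation below; the paper's soundness and completeness results are stated with respect to CSP refinement generally, of which this is the most basic instance.

```latex
% Trace refinement in CSP (standard definition): Q refines P iff every
% observable behaviour of Q is already allowed by P.
P \sqsubseteq_{T} Q \iff \mathrm{traces}(Q) \subseteq \mathrm{traces}(P)
```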
Abstract:
With the increase in mobile platforms available on the market and the constant growth of their computational power, the possibility of running applications, and in particular games with high performance requirements, has increased considerably. The video game market thus has an ever larger number of potential customers. In particular, the market for massively multiplayer online (MMO) games has become very attractive for game development companies. These games support a large number of simultaneous players who may be running the game on different platforms and be distributed across an extensive game "world". To encourage exploration of that "world", points of interest that the player can explore are distributed intelligently. This approach entails substantial effort in planning and building those worlds, consuming time and resources during the development phase. This represents a problem for game development companies, and in some cases it is impractical for indie teams to bear such costs. This thesis presents an approach to creating worlds for MMO games. Several successful MMO games are studied in order to identify common properties of their worlds. The goal is to create a flexible framework capable of generating worlds whose structures respect sets of rules defined by game designers. So that the approach presented here can be used in several different applications, two main modules were developed. The first, called rule-based-map-generator, contains the logic and operations required for world creation. The second, called blocker, is a wrapper around the rule-based-map-generator module that manages communication between the server and clients. In short, the overall goal is to provide a framework that eases the generation of worlds for MMO games, normally a very time-consuming process that significantly increases production costs, through a semi-automatic approach combining the benefits of procedural content generation (PCG) with manually created graphical content.
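As a toy illustration of the rule-based generation the abstract describes, the sketch below places points of interest subject to a single designer rule (a minimum pairwise distance); the function and its parameters are invented for this example and are not the rule-based-map-generator API.

```python
# Minimal sketch: sample points of interest (POIs) that satisfy a
# designer-specified minimum-distance rule. All names are illustrative.
import random

def generate_pois(width, height, count, min_dist, seed=0, max_tries=10_000):
    rng = random.Random(seed)
    pois, tries = [], 0
    while len(pois) < count and tries < max_tries:
        tries += 1
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        # Keep the candidate only if it is far enough from every accepted POI.
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2 for px, py in pois):
            pois.append((x, y))
    return pois

print(generate_pois(100.0, 100.0, count=10, min_dist=15.0))
```

A full generator would evaluate many such rules (biome constraints, path connectivity, density targets) before accepting a world.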
Abstract:
We analyze the incentives for cooperation of three players differing in their efficiency of effort in a contest game. We concentrate on the non-cooperative bargaining foundation of coalition formation, and therefore, we adopt a two-stage model. In the first stage, individuals form coalitions following a bargaining protocol similar to the one proposed by Gul (1989). Afterwards, coalitions play the contest game of Esteban and Ray (1999) within the resulting coalition structure of the first stage. We find that the grand coalition forms whenever the distribution of the bargaining power in the coalition formation game is equal to the distribution of the relative efficiency of effort. Finally, we use the case of equal bargaining power for all individuals to show that other types of coalition structures may be observed as well.
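For orientation, contests of this kind are typically modelled with a ratio-form contest success function in which effort is weighted by efficiency; the exact specification in Esteban and Ray (1999) may differ, so the formula below is only the generic template.

```latex
% Generic ratio-form contest success function (illustrative): p_i is player
% (or coalition) i's winning probability, e_i its effort, theta_i its
% efficiency of effort.
p_i = \frac{\theta_i\, e_i}{\sum_{j} \theta_j\, e_j}
```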
Abstract:
Qin [J. Eco. Th., 1996] recently showed that in a game of endogenous formation of cooperation structure, if the underlying TU-game is superadditive, then the full cooperation structure is stable. In this note, we characterize the class of games that ensure the stability of the full cooperation structure, and show that this class is much larger than that of superadditive TU-games.
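For reference, superadditivity of a TU-game (N, v) is the standard condition below: merging disjoint coalitions never destroys worth.

```latex
% Superadditivity of a TU-game (N, v), the standard definition:
v(S \cup T) \ge v(S) + v(T)
\quad \text{for all } S, T \subseteq N \text{ with } S \cap T = \emptyset .
```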
Abstract:
This paper studies the stability of a finite local public goods economy with horizontal differentiation, where a jurisdiction's choice of the public good is given by an exogenous decision scheme. We characterize the class of decision schemes that ensure the existence of an equilibrium with free mobility (which we call a Tiebout equilibrium) for monotone distributions of players. This class contains all decision schemes whose choice lies between the Rawlsian decision scheme and the median voter (taking the mid-point of the two median voters when there are ties). We show that for non-monotone distributions, no decision scheme can ensure the stability of coalitions. In the last part of the paper, we prove the non-emptiness of the core of this coalition formation game.
Abstract:
Game theory describes and analyzes strategic interaction. One usually distinguishes between static games, strategic situations in which the players choose only once and simultaneously, and dynamic games, strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. A dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far; in the case of imperfect information, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. It is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games in the sense of identifying the choices that should be taken by rational players. The ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players: before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account of dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Sufficient conditions for backward induction are then derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
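For context, the classical result Chapter 5 weakens is Aumann's agreement theorem; its standard statement is sketched below.

```latex
% Aumann (1976), standard statement: with a common prior, posteriors that are
% common knowledge must coincide -- agents cannot "agree to disagree".
\text{If } q_i = \Pr(E \mid \mathcal{I}_i) \text{ for } i = 1, 2,
\text{ and the event } \{q_1 = a \wedge q_2 = b\}
\text{ is common knowledge, then } a = b .
```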
Abstract:
The Western Alpine Arc was created during the Cretaceous and the Tertiary orogenies. The interference patterns of the Tertiary structures suggest their formation during continental collision of the European and the Adriatic Plates, with an accompanying anticlockwise rotation of the Adriatic indenter. Extensional structures are mainly related to ductile deformation by simple shear. These structures developed at a deep tectonic level, in granitic crustal rocks, at depths in excess of 10 km. In the early Palaeogene period of the Tertiary Orogeny, the main Tertiary nappe emplacement resulted from NW-thrusting of the Austroalpine, Penninic and Helvetic nappes. Heating of the deep zone of the Upper Cretaceous and Tertiary nappe stack by geothermal heat flow is responsible for the Tertiary regional metamorphism, reaching amphibolite-facies conditions in the Lepontine Gneiss Dome (geothermal gradient 25 °C/km). The Tertiary thrusting occurred mainly during prograde metamorphic conditions with creation of a penetrative NW-SE-oriented stretching lineation, X1 (finite extension), parallel to the direction of simple shear. Earliest cooling after the culmination of the Tertiary metamorphism, some 38 Ma ago, is recorded by the cooling curves of the Monte Rosa and Mischabel nappes to the west and the Suretta Nappe to the east of the Lepontine Gneiss Dome. The onset of dextral transpression, with a strong extension parallel to the mountain belt, and the oldest S-vergent "backfolding" took place some 35 to 30 Ma ago during retrograde amphibolite-facies conditions and before the intrusion of the Oligocene dikes north of the Periadriatic Line. The main updoming of the Lepontine Gneiss Dome started some 32-30 Ma ago with the intrusion of the Bergell tonalites and granodiorites, concomitant with S-vergent backfolding and backthrusting and dextral strike-slip movements along the Tonale and Canavese Lines (Argand's Insubric phase). Subsequently, the center of main updoming migrated slowly to the west, reaching the Simplon region some 20 Ma ago. This was contemporaneous with the westward migration of the Adriatic indenter. Between 20 Ma and the present, the Western Aar Massif-Toce culmination was the center of strong uplift. The youngest S-vergent backfolds, the Glishorn anticline and the Berisal syncline, fold the 12 Ma Rb/Sr biotite isochron and are cut by the 11 Ma old Rhone-Simplon Line. The discrete Rhone-Simplon Line represents a late retrograde manifestation in the preexisting ductile Simplon Shear Zone. This fault zone is still active today. The Oligocene-Neogene dextral transpression and extension in the Simplon area were concurrent with thrusting to the northwest of the Helvetic nappes and the Prealpes (35-15 Ma) and with the Jura thin-skinned thrust (11-3 Ma). It was also contemporaneous with thrusting to the south of the Bergamasc (> 35-5 Ma) and Milan thrusts (16-5 Ma).
Abstract:
Using data on all fixtures for the seasons from 1972-73 to 2002-03, we estimate a dynamic model of demand for football pools in Spain, paying attention to whether the main economic explanatory variable is the effective price of a ticket or the jackpot. Additionally, we evaluate the importance of the composition of the list of games, in terms of whether First Division matches are included or not. Results show that the jackpot model is preferred to the effective-price model, which has important implications for how the structure of the game should be changed in order to increase demand.
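For reference, the effective price in this literature is commonly defined as the face value of a ticket net of the expected prize money returned per ticket; the paper's exact construction may differ, so this is only the standard template.

```latex
% Standard effective-price definition in the lottery/pools demand literature:
% face value c minus the expected value of prizes returned per ticket.
p_{\text{eff}} = c - \operatorname{E}[\text{prizes per ticket}]
```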