9 results for Take Two Interactive, business strategy, video games, at Université de Lausanne, Switzerland
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, strategic situations in which the players choose only once and simultaneously, and dynamic games, strategic situations involving sequential choices. Dynamic games can be further classified according to perfect and imperfect information: a dynamic game exhibits perfect information if, at any point of the game, every player is fully informed about all choices made so far, whereas under imperfect information some players are not fully informed about some of these choices. Game-theoretic analysis proceeds in two steps. First, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models are the normal form, which describes a game rather sparsely, merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. It is standard to formalize static games with the normal form and dynamic games with the extensive form. Second, solution concepts are developed to solve models of games, in the sense of identifying the choices that rational players should take. The ultimate objective of the classical, normative approach to game theory is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain that a solution concept with such universal capability can be devised. Alternatively, interactive epistemology provides an epistemic approach to game theory of a descriptive character. This rather recent discipline analyzes the relation between the knowledge, beliefs and choices of game-playing agents in an epistemic framework. The fundamental problem addressed by an epistemic approach to game theory is the description of the players' choices in a given game relative to various epistemic assumptions. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players: before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, can be expressed explicitly in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated; Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account of dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
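As a purely illustrative aside (not part of the thesis), the backward-induction procedure mentioned above can be made concrete with a small sketch: starting from the leaves of a perfect-information game tree, the player moving at each node picks the action that maximizes her own payoff given the already-solved subgames. The toy game, payoff values and function name below are assumptions chosen for the example.

```python
# Illustrative sketch: backward induction on a small perfect-information game tree.
# Internal nodes name the player to move; leaves carry a payoff vector (one entry per player).

def backward_induction(node):
    """Return (payoff vector, chosen action) for the subgame rooted at `node`."""
    if "payoffs" in node:                 # leaf: payoffs are fixed, nothing to choose
        return node["payoffs"], None
    player = node["player"]               # index of the player moving at this node
    best_payoffs, best_action = None, None
    for action, child in node["children"].items():
        payoffs, _ = backward_induction(child)   # solve the subgame first
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_payoffs, best_action = payoffs, action
    return best_payoffs, best_action

# Toy two-player game: player 0 moves first, player 1 replies.
game = {
    "player": 0,
    "children": {
        "L": {"player": 1,
              "children": {"l": {"payoffs": (2, 1)}, "r": {"payoffs": (0, 0)}}},
        "R": {"player": 1,
              "children": {"l": {"payoffs": (1, 2)}, "r": {"payoffs": (3, 0)}}},
    },
}

if __name__ == "__main__":
    payoffs, first_move = backward_induction(game)
    print(first_move, payoffs)            # expected: L (2, 1)
```

In this toy tree, player 1's best replies are computed first, and player 0 then chooses the branch leading to the outcome (2, 1).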
Abstract:
We present a novel hybrid (or multiphysics) algorithm, which couples pore-scale and Darcy descriptions of two-phase flow in porous media. The flow at the pore scale is described by the Navier-Stokes equations, and the Volume of Fluid (VOF) method is used to model the evolution of the fluid-fluid interface. An extension of the Multiscale Finite Volume (MsFV) method is employed to construct the Darcy-scale problem. First, a set of local interpolators for pressure and velocity is constructed by solving the Navier-Stokes equations; then, a coarse mass-conservation problem is constructed by averaging the pore-scale velocity over the cells of a coarse grid, which act as control volumes; finally, a conservative pore-scale velocity field is reconstructed and used to advect the fluid-fluid interface. The method relies on the localization assumptions used to compute the interpolators (which are quite straightforward extensions of the standard MsFV) and on the postulate that the coarse-scale fluxes are proportional to the coarse-pressure differences. By numerical simulations of two-phase problems, we demonstrate that these assumptions provide hybrid solutions that are in good agreement with reference pore-scale solutions and are able to model the transition from stable to unstable flow regimes. Our hybrid method can naturally take advantage of several adaptive strategies and allows pore-scale fluxes to be considered only in some regions, while Darcy fluxes are used in the rest of the domain. Moreover, since the method relies on the assumption that the relationship between coarse-scale fluxes and pressure differences is local, it can be used as a numerical tool to investigate the limits of validity of Darcy's law and to understand the link between pore-scale quantities and their corresponding Darcy-scale variables.
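To make the coupling step more concrete, the relations below restate it in notation introduced here for illustration only (the paper's own symbols may differ): the coarse velocity is a volume average of the pore-scale velocity over each control volume, coarse fluxes are postulated to be proportional to coarse-pressure differences, and mass is conserved on the coarse grid.

```latex
% Notation chosen here for illustration; it is not taken from the paper.
\begin{align}
  \bar{\mathbf{u}}_I &= \frac{1}{|\Omega_I|}\int_{\Omega_I}\mathbf{u}\,\mathrm{d}V
    && \text{(coarse velocity: volume average of the pore-scale velocity)}\\
  Q_{IJ} &\approx T_{IJ}\,(P_I - P_J)
    && \text{(postulate: coarse flux proportional to the coarse-pressure difference)}\\
  \sum_{J\in\operatorname{adj}(I)} Q_{IJ} &= 0
    && \text{(coarse mass conservation in each control volume, absent sources)}
\end{align}
```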
Abstract:
Reliable quantification of the macromolecule signals in short echo-time H-1 MRS spectra is particularly important at high magnetic fields for an accurate quantification of metabolite concentrations (the neurochemical profile), owing to the effectively increased spectral resolution of the macromolecule components. The purpose of the present study was to assess two approaches to quantification, which take the contribution of macromolecules into account in the quantification step. H-1 spectra were acquired on a 14.1 T/26 cm horizontal scanner from five rats using the ultra-short echo-time SPECIAL (spin echo full intensity acquired localization) spectroscopy sequence. Metabolite concentrations were estimated using LCModel, combined with a simulated basis set of metabolites using published spectral parameters and either the spectrum of macromolecules measured in vivo with an inversion-recovery technique or a baseline simulated by the built-in spline function. The fitted spline function resulted in a smooth approximation of the in vivo macromolecules but, in accordance with previous studies using Subtract-QUEST, could not completely reproduce all features of the in vivo spectrum of macromolecules at 14.1 T. As a consequence, the measured macromolecular 'baseline' led to a more accurate and reliable quantification at higher field strengths.
Abstract:
Autophagy is a cellular mechanism for degrading proteins and organelles. It was first described as a physiological process essential for maintaining homeostasis and cell survival, but understanding its role under conditions of stress has been complicated by the recognition of a new type of cell death ("type 2") characterized by deleterious autophagic activity. This paradox is important in the central nervous system, where the activation of autophagy seems to be protective in certain neurodegenerative diseases but deleterious in cerebral ischemia. The development of new therapeutic strategies based on the manipulation of autophagy will need to take these opposing roles into account.
Abstract:
Observing infants in triadic situations has revealed their triangular competence, namely their ability to interact with both parents by simultaneously sharing their attention and affects with them. Infants' triangular interaction is linked with the coparenting unit's degree of coordination: in high-coordination (HC) families, parents act as a team in relation to the child, thus drawing clear and flexible boundaries with the child; in low-coordination (LC) families, parents either avoid direct interaction with each other and include the child in their unit, or join together against the child and exclude him or her, thus drawing inconsistent boundaries with the child. We explored the interactive strategies of LC 9-month-olds (n = 15) with their parents, comparing them with those of HC families (n = 23) in two conditions: playing with both parents at the same time and witnessing the parents' dialogue. LC infants' affects were less positive; they addressed fewer positive triangular bids to their parents and tended to use a less triangular interactive mode. Thus, LC infants had fewer opportunities than HC infants to acquire the skills necessary for coping with triangular interaction.
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation, even when the game is played anonymously and without repetition. Nowak and May's study, and much of the work that followed, was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable over time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), it has become apparent in the last few years that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and a small-world topological structure. Roughly speaking, a small-world graph is a network in which any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals on a regular lattice nor by a random graph. Furthermore, it is well known that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and the way ideas or information are transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks can explain why one finds higher levels of cooperation in populations of human beings or animals than classical game theory prescribes. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks, using diverse statistical measures.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the mechanisms responsible for maintaining it. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics used and on how individual payoffs are calculated.
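As a hedged illustration of the kind of model discussed above (not the author's code), the following sketch runs a one-shot prisoner's dilemma on a Watts-Strogatz small-world graph with a simple imitate-the-best update rule; the payoff values, graph parameters and update rule are assumptions made for the demo.

```python
# Illustrative sketch: a prisoner's dilemma played on a small-world graph,
# with synchronous imitate-the-best-neighbour updating.
import random
import networkx as nx

T, R, P, S = 1.5, 1.0, 0.0, 0.0   # temptation, reward, punishment, sucker (weak PD)

def payoff(a, b):
    """Row player's payoff when playing `a` ('C' or 'D') against `b`."""
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]

def step(G, strategy):
    """One synchronous round: accumulate payoffs, then copy the best-scoring neighbour."""
    score = {v: sum(payoff(strategy[v], strategy[w]) for w in G.neighbors(v))
             for v in G}
    return {v: strategy[max(list(G.neighbors(v)) + [v], key=lambda w: score[w])]
            for v in G}

if __name__ == "__main__":
    random.seed(0)
    G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=0)   # small-world topology
    strategy = {v: random.choice("CD") for v in G}            # random initial strategies
    for _ in range(50):
        strategy = step(G, strategy)
    coop = sum(s == "C" for s in strategy.values()) / G.number_of_nodes()
    print(f"fraction of cooperators after 50 rounds: {coop:.2f}")
```

Varying the rewiring probability p interpolates between a regular lattice and a random graph, which is one way to probe how small-world structure affects the surviving fraction of cooperators.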
Abstract:
This paper presents a pilot project to reinforce participatory practices in standardization. The INTERNORM project is funded by the University of Lausanne, Switzerland. It aims to create an interactive knowledge center based on the sharing of academic skills and of the experience accumulated by civil society, especially consumer associations, environmental associations and trade unions, to strengthen the participatory process of standardization. The first objective of the project is action-oriented: INTERNORM provides a common knowledge pool supporting the participation of civil society actors in international standard-setting activities by bringing them together with academic experts in working groups and by providing logistic and financial support for their participation in meetings of national and international technical committees. The second objective of the project is analytical: the standardization action initiated through INTERNORM provides a research field for a better understanding of the participatory dynamics underpinning international standardization. The paper presents three incentives that explain civil society (non-)involvement in standardization and that go beyond conventional resource-based hypotheses: an operational incentive, related to the use of standards in the selective goods that associations provide to their membership; a thematic incentive, provided by the setting of priorities by the strategic committees created in some standardization organizations; and a rhetorical incentive, related to the discursive resource that civil society concerns offer to the different stakeholders.
Abstract:
This paper presents a pilot project to reinforce participatory practices in standardization. The INTERNORM project creates an interactive knowledge center based on the sharing of academic skills and of the experience accumulated by civil society, especially consumer associations, environmental associations and trade unions, to strengthen the participatory process of standardization. The first objective of the project is action-oriented: INTERNORM provides a common knowledge pool supporting the participation of civil society actors in international standard-setting activities by bringing them together with academic experts in working groups and providing logistic and financial support for their participation in meetings of national and international technical committees. The second objective is analytical: the standardization action provides a research field for a better understanding of the participatory dynamics underpinning international standardization. This paper presents three incentives that explain civil society (non-)involvement in standardization and that go beyond conventional resource-based hypotheses: an operational incentive related to the use of standards in the selective goods that associations provide to their membership; a thematic incentive provided by the setting of priorities by the strategic committees created in some standardization organizations; and a rhetorical incentive related to the discursive resource that civil society concerns offer to the different stakeholders.
Abstract:
1. Niche theory predicts that the stable coexistence of species within a guild should be associated, if resources are limited, with a mechanism of resource partitioning. Using extensive data on diets, the present study attempts: (i) to test the hypothesis that, in sympatry, the interspecific overlap between the trophic niches of the sibling bat species Myotis myotis and M. blythii, which coexist intimately in their roosts, is effectively lower than the two intraspecific overlaps; and (ii) to assess the role played by interspecific competition in resource partitioning through the study of trophic niche displacement between several sympatric and allopatric populations. 2. Diets were determined by the analysis of faecal samples collected in the field from individual bats captured in various geographical areas. Trophic niche overlaps were calculated monthly for all possible intraspecific and interspecific pairs of individuals from sympatric populations. Niche breadth was estimated from (i) every faecal sample and (ii) all the faecal samples collected per month in a given population (geographical area). 3. In every population, the bulk of the diets of M. myotis and M. blythii consisted of, respectively, terrestrial prey (e.g. carabid beetles) and grass-dwelling prey (mostly bush crickets). All intraspecific trophic niche overlaps were significantly greater than the interspecific one, except in Switzerland in May, when both species exploited mass concentrations of cockchafers, a non-limiting food source. This clear-cut partitioning of resources may allow the stable, intimate coexistence observed under sympatric conditions. 4. Relative proportions of ground- and grass-dwelling prey, as well as niche breadths (either individual or population-level), did not differ significantly between sympatry and allopatry, showing that niche expansion does not take place under allopatric conditions. This suggests that active interspecific competition is not the underlying mechanism responsible for the niche partitioning currently observed between M. myotis and M. blythii.
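For orientation only: the abstract does not state which overlap measure the study used, but a commonly cited index of trophic niche overlap between two diets j and k, each expressed as proportions p_ij of prey category i, is Pianka's symmetric index, shown below.

```latex
% Pianka's niche-overlap index, given only as a common example of such a measure;
% the abstract does not specify the overlap metric actually used in the study.
\begin{equation}
  O_{jk} = \frac{\sum_{i} p_{ij}\,p_{ik}}
                {\sqrt{\left(\sum_{i} p_{ij}^{2}\right)\left(\sum_{i} p_{ik}^{2}\right)}},
  \qquad 0 \le O_{jk} \le 1 .
\end{equation}
```

A value near 1 indicates nearly identical diets, while a value near 0 indicates little shared use of prey categories.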