927 results for Random finite set theory


Relevance:

30.00%

Publisher:

Abstract:

This paper presents several algorithms for joint estimation of the target number and state in a time-varying scenario. Building on the results presented in [1], which considers estimation of the target number only, we assume that not only the target number but also the target state evolution must be estimated. In this context, we extend the Rao-Blackwellization procedure of [1] to this new scenario to compute the Bayes recursions, thus defining reduced-complexity solutions for the multi-target set estimator. A performance assessment is finally given both in terms of Circular Position Error Probability, aimed at evaluating the accuracy of the estimated track, and in terms of Cardinality Error Probability, aimed at evaluating the reliability of the target number estimates.
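The paper's Rao-Blackwellized multi-target set estimator is not reproduced here, but the core idea of recursively updating a posterior over the target number can be illustrated with a generic discrete Bayes filter. The transition kernel, detection probability and clutter rate below are illustrative assumptions, not values or models from the paper.

```python
import numpy as np
from scipy.stats import binom, poisson

# Illustrative discrete Bayes filter over the target number n = 0..N_MAX.
# Measurement model (assumption): each target is detected with probability P_D,
# and the sensor adds Poisson clutter with mean LAMBDA_C, so the number of
# measurements m given n targets is Binomial(n, P_D) plus Poisson(LAMBDA_C).

N_MAX, P_D, LAMBDA_C = 5, 0.9, 1.0

def likelihood(m, n):
    """P(m measurements | n targets): sum over the number of detected targets."""
    ks = np.arange(0, min(m, n) + 1)
    return np.sum(binom.pmf(ks, n, P_D) * poisson.pmf(m - ks, LAMBDA_C))

# Simple birth/death transition kernel for the target number (assumption).
T = np.zeros((N_MAX + 1, N_MAX + 1))
for n in range(N_MAX + 1):
    T[n, n] = 0.8
    if n > 0:
        T[n, n - 1] = 0.1          # one target disappears
    if n < N_MAX:
        T[n, n + 1] = 0.1          # one target appears
    T[n] /= T[n].sum()

posterior = np.full(N_MAX + 1, 1.0 / (N_MAX + 1))    # uniform prior on n
for m in [2, 3, 3, 2, 4]:                            # observed measurement counts
    prior = T.T @ posterior                          # prediction step
    posterior = prior * np.array([likelihood(m, n) for n in range(N_MAX + 1)])
    posterior /= posterior.sum()                     # Bayes update
    print(f"m={m}: MAP target number = {posterior.argmax()}")
```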

Relevance:

30.00%

Publisher:

Abstract:

The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals on a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining this cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on how individual payoffs are calculated.
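The Nowak and May (1992) result referred to above is easy to reproduce in a few lines. The following is a minimal, illustrative sketch of a spatial prisoner's dilemma on a periodic grid with imitate-the-best-neighbor updating; the grid size, temptation payoff b and number of steps are assumptions, not parameters from the thesis.

```python
import numpy as np

# Minimal spatial prisoner's dilemma in the spirit of Nowak & May (1992):
# players on an L x L torus play C (1) or D (0) against their four von Neumann
# neighbors and then imitate the highest-scoring strategy in their neighborhood
# (strict improvement required). Payoffs (weak dilemma, assumed here):
# C vs C earns 1, D vs C earns b > 1, every other pairing earns 0.

L, b, steps = 50, 1.8, 100
rng = np.random.default_rng(0)
S = (rng.random((L, L)) < 0.5).astype(int)   # 1 = cooperate, 0 = defect

def rolled(grid):
    """The four lattice shifts that align each site with its neighbors."""
    return [np.roll(grid, 1, 0), np.roll(grid, -1, 0),
            np.roll(grid, 1, 1), np.roll(grid, -1, 1)]

for _ in range(steps):
    # accumulate each site's payoff over its four neighbors
    P = np.zeros((L, L))
    for NS in rolled(S):
        P += S * NS + (1 - S) * NS * b
    # imitate the best-scoring strategy seen in the neighborhood (including self)
    best_payoff, best_strategy = P.copy(), S.copy()
    for NP, NS in zip(rolled(P), rolled(S)):
        better = NP > best_payoff
        best_payoff = np.where(better, NP, best_payoff)
        best_strategy = np.where(better, NS, best_strategy)
    S = best_strategy

print("fraction of cooperators after", steps, "steps:", S.mean())
```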

Relevance:

30.00%

Publisher:

Abstract:

We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
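The direction-of-selection result referred to above, expressed in terms of phenotypic costs, benefits and genealogical relatedness, is classically summarized by Hamilton's rule; the statement below is the standard textbook form, not a derivation from the paper.

```latex
% Hamilton's rule: under additive gene action and weak selection, an allele for a
% social trait increases in frequency when the relatedness-weighted benefit
% exceeds the cost.
\[
  -c \;+\; r\,b \;>\; 0,
\]
% where $c$ is the fecundity cost to the actor, $b$ the benefit to the recipient,
% and $r$ the genealogical relatedness between actor and recipient.
```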

Relevance:

30.00%

Publisher:

Abstract:

Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism generating the bit sequence. The goal of the predictor is to minimize its relative loss, i.e., to make (almost) as few mistakes as the best "expert" in a fixed, possibly infinite, set of experts. We point out a surprising connection between this prediction problem and empirical process theory. First, in the special case of static (memoryless) experts, we completely characterize the minimax relative loss in terms of the maximum of an associated Rademacher process. Then we show general upper and lower bounds on the minimax relative loss in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax relative loss for the class of autoregressive linear predictors and for the class of Markov experts.
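The minimax characterization in the abstract is analytic, but the setting (randomized prediction of a binary sequence against a finite class of static experts) is easy to simulate. Below is a minimal exponentially weighted forecaster, a standard algorithm for this problem rather than the paper's construction; the expert class, sequence and learning rate are illustrative assumptions.

```python
import numpy as np

# Randomized prediction of a binary sequence with expert advice.
# "Static" experts here each predict 1 with a fixed probability; the forecaster
# follows one expert per round, drawn in proportion to exponential weights.

rng = np.random.default_rng(1)
T = 1000
sequence = rng.integers(0, 2, size=T)             # arbitrary bit sequence (illustrative)
expert_probs = np.linspace(0.1, 0.9, 8)           # assumed class of 8 static experts

eta = np.sqrt(8 * np.log(len(expert_probs)) / T)  # standard learning-rate choice
weights = np.ones(len(expert_probs))
forecaster_mistakes = 0
expert_mistakes = np.zeros(len(expert_probs))

for t in range(T):
    predictions = (rng.random(len(expert_probs)) < expert_probs).astype(int)
    chosen = rng.choice(len(expert_probs), p=weights / weights.sum())
    forecaster_mistakes += int(predictions[chosen] != sequence[t])
    losses = (predictions != sequence[t]).astype(float)
    expert_mistakes += losses
    weights *= np.exp(-eta * losses)              # multiplicative weight update

print("relative loss w.r.t. the best expert:",
      forecaster_mistakes - expert_mistakes.min())
```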

Relevance:

30.00%

Publisher:

Abstract:

We obtain minimax lower bounds on the regret for the classical two-armed bandit problem. We provide a finite-sample minimax version of the well-known log n asymptotic lower bound of Lai and Robbins. Also, in contrast to the log n asymptotic results on the regret, we show that the minimax regret is achieved by mere random guessing under fairly mild conditions on the set of allowable configurations of the two arms. That is, we show that for every allocation rule and for every n, there is a configuration such that the regret at time n is at least 1 − ε times the regret of random guessing, where ε is any small positive constant.
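A quick way to see what "the regret of random guessing" means is to simulate it; the sketch below uses two Bernoulli arms with illustrative means (not a configuration from the paper) and the uniformly random allocation rule.

```python
import numpy as np

# Regret of the uniformly random allocation rule on a two-armed Bernoulli bandit.
# Arm means are illustrative assumptions; regret at time n is
#   n * max(p1, p2) - reward actually collected.

rng = np.random.default_rng(2)
p = np.array([0.55, 0.45])        # assumed arm means
n = 10_000

arms = rng.integers(0, 2, size=n)         # random guessing: each arm w.p. 1/2
rewards = rng.random(n) < p[arms]         # Bernoulli rewards
regret = n * p.max() - rewards.sum()
print("empirical regret of random guessing at n =", n, ":", regret)
# Its expectation is n * (p.max() - p.mean()), i.e. n times half the gap.
print("expected regret:", n * (p.max() - p.mean()))
```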

Relevance:

30.00%

Publisher:

Abstract:

We will call a game a reachable (pure strategy) equilibria game if, starting from any strategy by any player, by a sequence of best-response moves we are able to reach a (pure strategy) equilibrium. We give a characterization of all finite strategy space duopolies with reachable equilibria. We describe some applications of the sufficient conditions of the characterization.
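The notion of reachability used here can be made concrete with a small search over best-response moves; the bimatrix game below (a prisoner's dilemma) is an illustrative example, not one taken from the paper.

```python
import numpy as np
from itertools import product

# Check whether every strategy profile of a finite two-player game can reach a
# pure Nash equilibrium through some sequence of single-player best-response moves.

A = np.array([[3, 0], [5, 1]])     # row player's payoffs (illustrative PD)
B = np.array([[3, 5], [0, 1]])     # column player's payoffs

def best_response_successors(i, j):
    """Profiles reachable in one best-response move by either player."""
    succ = set()
    for r in np.flatnonzero(A[:, j] == A[:, j].max()):
        succ.add((int(r), j))                       # row player best-responds
    for c in np.flatnonzero(B[i, :] == B[i, :].max()):
        succ.add((i, int(c)))                       # column player best-responds
    return succ

def is_pure_equilibrium(i, j):
    return A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()

def reaches_equilibrium(start):
    """Breadth-first search along best-response moves from a given profile."""
    seen, frontier = {start}, [start]
    while frontier:
        nxt = []
        for prof in frontier:
            if is_pure_equilibrium(*prof):
                return True
            for s in best_response_successors(*prof) - seen:
                seen.add(s)
                nxt.append(s)
        frontier = nxt
    return False

print("every profile reaches a pure equilibrium via best responses:",
      all(reaches_equilibrium(p) for p in product(range(2), range(2))))
```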

Relevance:

30.00%

Publisher:

Abstract:

In recent years there has been explosive growth in the development of adaptive and data-driven methods. One of the efficient data-driven approaches is based on statistical learning theory (SLT; Vapnik 1998). The theory is based on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and for forecasting. SVM are extremely good when the input space is high-dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in Kanevski and Maignan (2004).
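As a concrete illustration of the support-vector idea described above, the sketch below fits a nonlinear SVM on synthetic data with scikit-learn and inspects which training points become support vectors; the data set and hyperparameters are assumptions chosen only for illustration.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Fit a nonlinear (RBF-kernel) support vector classifier on a synthetic data set
# and inspect the support vectors that define its decision boundary.

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
model = SVC(kernel="rbf", C=1.0, gamma="scale")       # illustrative hyperparameters
model.fit(X, y)

print("training accuracy:", model.score(X, y))
print("support vectors per class:", model.n_support_)
# Only these points enter the decision function; the remaining samples are
# redundant, which is the property the abstract links to sampling optimisation.
print("fraction of data kept as support vectors:", len(model.support_) / len(X))
```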

Relevance:

30.00%

Publisher:

Abstract:

In earlier work, the present authors have shown that hardness profiles are less dependent on the level of calculation than energy profiles for potential energy surfaces (PESs) having pathological behaviors. At variance with energy profiles, hardness profiles always show the correct number of stationary points. This characteristic has been used to indicate the existence of spurious stationary points on the PESs. In the present work, we apply this methodology to the hydrogen fluoride dimer, a classically difficult case for density functional theory methods.
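For reference, the chemical hardness whose profile is tracked along the reaction coordinate is conventionally defined as below (standard conceptual-DFT definitions, not restated in the abstract).

```latex
% Chemical hardness and its common finite-difference and frontier-orbital
% approximations (standard definitions).
\[
  \eta \;=\; \frac{1}{2}\left(\frac{\partial^2 E}{\partial N^2}\right)_{v(\mathbf{r})}
  \;\approx\; \frac{I - A}{2}
  \;\approx\; \frac{\varepsilon_{\mathrm{LUMO}} - \varepsilon_{\mathrm{HOMO}}}{2},
\]
% where $I$ is the vertical ionization potential and $A$ the electron affinity.
% A hardness profile plots $\eta$ along the reaction coordinate of the PES,
% in parallel with the usual energy profile.
```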

Relevance:

30.00%

Publisher:

Abstract:

Many traits and/or strategies expressed by organisms are quantitative phenotypes. Because populations are of finite size and genomes are subject to mutations, these continuously varying phenotypes are under the joint pressure of mutation, natural selection and random genetic drift. This article derives the stationary distribution for such a phenotype under a mutation-selection-drift balance in a class-structured population allowing for demographically varying class sizes and/or changing environmental conditions. The salient feature of the stationary distribution is that it can be entirely characterized in terms of the average size of the gene pool and Hamilton's inclusive fitness effect. The exploration of the phenotypic space varies exponentially with the cumulative inclusive fitness effect over state space, which determines an adaptive landscape. The peaks of the landscape are those phenotypes that are candidate evolutionarily stable strategies and can be determined by standard phenotypic selection gradient methods (e.g. evolutionary game theory, kin selection theory, adaptive dynamics). The curvature of the stationary distribution provides a measure of the convergence stability of candidate evolutionarily stable strategies, and it is evaluated explicitly for two biological scenarios: first, a coordination game, which illustrates that, for a multipeaked adaptive landscape, stochastically stable strategies can be singled out by letting the size of the gene pool grow large; second, a sex-allocation game for diploids and haplo-diploids, which suggests that the equilibrium sex ratio follows a Beta distribution with parameters depending on the features of the genetic system.
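The statement that exploration of phenotypic space "varies exponentially with the cumulative inclusive fitness effect" can be written schematically as follows; the exact constants depend on the genetic system and are not given in the abstract, so this is only an illustrative form.

```latex
% Schematic form of the stationary distribution of the phenotype $z$ under the
% mutation-selection-drift balance described above (constants are illustrative).
\[
  \pi(z) \;\propto\; \exp\!\Big( \Lambda \int^{z} S(x)\,\mathrm{d}x \Big),
\]
% where $S(x)$ is Hamilton's inclusive fitness effect (the selection gradient)
% at phenotype $x$, and $\Lambda$ scales with the average size of the gene pool.
% Peaks of the exponent ($S(z^\ast)=0$, $S'(z^\ast)<0$) are the candidate ESSs,
% and the curvature of $\pi$ at a peak measures convergence stability.
```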

Relevance:

30.00%

Publisher:

Abstract:

We show that any cooperative TU game is the maximum of a finite collection of convex games. This max-convex decomposition can be refined by using convex games with non-negative dividends for all coalitions of at least two players. As a consequence of the above results, we show that the class of modular games is a set of generators of the distributive lattice of all cooperative TU games. Finally, we characterize zero-monotonic games using a strong max-convex decomposition.
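To make the statement concrete, the sketch below checks convexity (supermodularity) of small TU games given as dictionaries of coalition values and takes the pointwise maximum of two convex games; the example games are illustrative assumptions, and they show why the decomposition result is non-trivial (the maximum of convex games is generally not convex).

```python
from itertools import chain, combinations

# A TU game on player set N is a map v: 2^N -> R with v(empty set) = 0.
# It is convex (supermodular) iff v(S u T) + v(S n T) >= v(S) + v(T) for all S, T.

N = frozenset({1, 2, 3})

def coalitions(players):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))]

def is_convex(v):
    return all(v[S | T] + v[S & T] >= v[S] + v[T] - 1e-12
               for S in v for T in v)

# Two illustrative convex games:
# v1(S) = |S|^2 (increasing marginal contributions), v2(S) = 4 if player 1 is in S.
v1 = {S: len(S) ** 2 for S in coalitions(N)}
v2 = {S: (4.0 if 1 in S else 0.0) for S in coalitions(N)}
v_max = {S: max(v1[S], v2[S]) for S in coalitions(N)}   # pointwise maximum

print("v1 convex:", is_convex(v1))
print("v2 convex:", is_convex(v2))
print("pointwise maximum convex:", is_convex(v_max))    # False here
```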

Relevance:

30.00%

Publisher:

Abstract:

This paper derives the HJB (Hamilton-Jacobi-Bellman) equation for sophisticated agents in a finite-horizon dynamic optimization problem with non-constant discounting in a continuous setting, using a dynamic programming approach. A simple example is used to illustrate the applicability of this HJB equation, by suggesting a method for constructing the subgame perfect equilibrium solution to the problem. Conditions for observational equivalence with an associated problem with constant discounting are analyzed. Special attention is paid to the case of free terminal time. Strotz's model (a cake-eating problem for a nonrenewable resource with non-constant discounting) is revisited.
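For orientation, the baseline that the paper generalizes is the standard finite-horizon HJB equation under constant exponential discounting, stated below; the non-constant-discounting equation for sophisticated agents modifies this and is not reproduced here.

```latex
% Standard finite-horizon HJB equation under constant discounting, written for the
% present-value value function V(t, x); shown only as the constant-discounting
% baseline, not the paper's generalized equation.
\[
  -\frac{\partial V}{\partial t}(t,x)
  \;=\;
  \max_{u}\;\Big\{ e^{-\rho t} F(x,u,t)
    \;+\; \frac{\partial V}{\partial x}(t,x)\, f(x,u,t) \Big\},
  \qquad V(T,x) \;=\; e^{-\rho T}\, S(x),
\]
% with state dynamics $\dot{x} = f(x,u,t)$, instantaneous payoff $F$, terminal
% payoff $S$, constant discount rate $\rho$, and horizon $T$.
```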

Relevance:

30.00%

Publisher:

Abstract:

A static comparative study of set-solutions for cooperative TU games is carried out. The analysis focuses on the compatibility between two classical and reasonable properties introduced by Young (1985) in the context of single-valued solutions, namely core-selection and coalitional monotonicity. As the main result, it is shown that coalitional monotonicity is incompatible not only with the core-selection property but also with the bargaining-selection property. This new impossibility result reinforces the trade-off between these kinds of interesting and intuitive economic properties. Positive results about the compatibility of desirable economic properties are given by replacing the core-selection requirement with the core-extension property.
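For reference, the two properties at the heart of the impossibility result can be stated in Young's original single-valued formulation as follows (standard definitions, not restated in the abstract; the paper adapts them to set-solutions).

```latex
% Young's (1985) properties for a single-valued solution sigma on TU games;
% C(v) denotes the core of v.
\begin{align*}
  \text{Core-selection:} \quad & \sigma(v) \in C(v) \ \text{whenever}\ C(v) \neq \emptyset. \\
  \text{Coalitional monotonicity:} \quad & \text{if } w(S) \ge v(S) \text{ and } w(T) = v(T)\ \text{for all } T \neq S, \\
  & \text{then } \sigma_i(w) \ge \sigma_i(v)\ \text{for every } i \in S .
\end{align*}
```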

Relevance:

30.00%

Publisher:

Abstract:

The present paper studies the probability of ruin of an insurer if excess-of-loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
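The stochastic-simulation alternative mentioned at the end of the abstract is straightforward for the classical model without reinstatements; the sketch below estimates a finite-time ruin probability for a compound-Poisson surplus process with illustrative parameters, omitting the paper's excess-of-loss reinsurance layer.

```python
import numpy as np

# Monte Carlo estimate of the finite-time ruin probability in the classical
# Cramér-Lundberg model: U(t) = u + c*t - (sum of claims up to t), with Poisson
# claim arrivals and (here) exponentially distributed claim sizes.
# All parameters are illustrative; the reinstatement structure is not modeled.

rng = np.random.default_rng(3)
u, horizon = 10.0, 50.0                 # initial surplus and time horizon
lam, claim_mean = 1.0, 1.0              # claim intensity and mean claim size
c = 1.2 * lam * claim_mean              # premium rate with a 20% security loading

def ruined(u, horizon):
    """Simulate one surplus path; ruin can only occur at claim instants."""
    t, total_claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)              # next claim arrival time
        if t > horizon:
            return False
        total_claims += rng.exponential(claim_mean)  # claim size
        if u + c * t - total_claims < 0:
            return True

n_paths = 20_000
psi_hat = np.mean([ruined(u, horizon) for _ in range(n_paths)])
print(f"estimated ruin probability within t = {horizon}: {psi_hat:.4f}")
```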

Relevance:

30.00%

Publisher:

Abstract:

This paper provides an axiomatic framework to compare the D-core (the set of undominated imputations) and the core of a cooperative game with transferable utility. Theorem 1 states that the D-core is the only solution satisfying projection consistency, reasonableness (from above), (*)-antimonotonicity, and modularity. Theorem 2 characterizes the core by replacing (*)-antimonotonicity with antimonotonicity. Moreover, these axioms also characterize the core on the domain of convex games, totally balanced games, balanced games, and superadditive games.
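For reference, the two set-solutions being compared are standardly defined as follows (standard definitions, not restated in the abstract).

```latex
% Standard definitions for a TU game $(N, v)$; write $x(S) = \sum_{i \in S} x_i$
% and let $I(v)$ be the set of imputations (efficient, individually rational
% payoff vectors).
\begin{align*}
  \text{Core:}\qquad
  & C(v) \;=\; \{\, x \in \mathbb{R}^{N} : x(N) = v(N),\ x(S) \ge v(S)\ \text{for all } S \subseteq N \,\}. \\[2pt]
  \text{D-core:}\qquad
  & DC(v) \;=\; \{\, x \in I(v) : \text{no } y \in I(v) \text{ dominates } x \,\}, \\
  & \text{where } y \text{ dominates } x \text{ via } S \iff
    y_i > x_i\ \text{for all } i \in S \ \text{and}\ y(S) \le v(S).
\end{align*}
```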