967 results for large deviation theory


Relevance:

30.00%

Publisher:

Abstract:

We draw on evidence scattered across thick descriptions of organizations to outline an alternative model of routine. Instead of defining routine as a process of compliance with prescribed rules and procedures, we define it as a process of deviation from the prescribed elements of organizations, resulting from the mutual constitution of repetitive work and improvisation. This view of routine underscores its adaptive nature and suggests that flexibility can be achieved not only by nimble and openly innovative organizations but also by large organizations engaging in ‘closet’ innovation.

Relevance:

30.00%

Publisher:

Abstract:

Many extensions of the Standard Model predict the existence of charged heavy long-lived particles, such as R-hadrons or charginos. These particles, if produced at the Large Hadron Collider, should be moving non-relativistically and are therefore identifiable through the measurement of an anomalously large specific energy loss in the ATLAS pixel detector. Measuring heavy long-lived particles through their track parameters in the vicinity of the interaction vertex provides sensitivity to metastable particles with lifetimes from 0.6 ns to 30 ns. A search for such particles with the ATLAS detector at the Large Hadron Collider is presented, based on a data sample corresponding to an integrated luminosity of 18.4 fb−1 of pp collisions at √s = 8 TeV. No significant deviation from the Standard Model background expectation is observed, and lifetime-dependent upper limits on R-hadron and chargino production are set. Gluino R-hadrons with a 10 ns lifetime and masses up to 1185 GeV are excluded at 95% confidence level, and so are charginos with a 15 ns lifetime and masses up to 482 GeV.
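
The mass measurement underlying such a search can be sketched numerically: for a slow particle, the specific energy loss is to a good approximation a function of βγ alone, so an assumed parametrisation dE/dx(βγ) = A/(βγ)² + B can be inverted and combined with the measured momentum via m = p/(βγ). The coefficients and numbers below are hypothetical placeholders, not the ATLAS pixel calibration.

```python
def mass_from_dedx(p_gev, dedx, A=1.0, B=0.5):
    """Estimate a particle's mass (GeV) from its momentum p (GeV) and its
    specific energy loss dE/dx, by inverting the assumed parametrisation
    dE/dx(beta*gamma) = A / (beta*gamma)**2 + B.
    A and B are hypothetical placeholders, not a real detector calibration."""
    if dedx <= B:
        raise ValueError("dE/dx at or below the assumed plateau: no slow-particle solution")
    beta_gamma = (A / (dedx - B)) ** 0.5   # invert dE/dx = A/(bg)^2 + B
    return p_gev / beta_gamma              # m = p / (beta*gamma)

# A 500 GeV track with an anomalously large dE/dx of 4.5 (in these illustrative
# units) corresponds to beta*gamma = 0.5, i.e. a mass of about 1000 GeV.
print(mass_from_dedx(p_gev=500.0, dedx=4.5))
```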

Relevance:

30.00%

Publisher:

Abstract:

Master's in Actuarial Science

Relevance:

30.00%

Publisher:

Abstract:

It is known that, in a locally presentable category, localization exists with respect to every set of morphisms, while the statement that localization with respect to every (possibly proper) class of morphisms exists in locally presentable categories is equivalent to a large-cardinal axiom from set theory. One proves similarly, on the one hand, that homotopy localization exists with respect to sets of maps in every cofibrantly generated, left proper, simplicial model category M whose underlying category is locally presentable. On the other hand, as we show in this article, the existence of localization with respect to possibly proper classes of maps in a model category M satisfying the above assumptions is implied by a large-cardinal axiom called Vopěnka's principle, although we do not know if the reverse implication holds. We also show that, under the same assumptions on M, every endofunctor of M that is idempotent up to homotopy is equivalent to localization with respect to some class S of maps, and if Vopěnka's principle holds then S can be chosen to be a set. There are examples showing that the latter need not be true if M is not cofibrantly generated. The above assumptions on M are satisfied by simplicial sets and symmetric spectra over simplicial sets, among many other model categories.

Relevance:

30.00%

Publisher:

Abstract:

We study pairwise decentralized trade in dynamic markets with homogeneous, non-atomic buyers and sellers who each wish to exchange one unit. Pairs of traders are randomly matched and bargain over a price under rules that offer the freedom to quit the match at any time. Market equilibria, prices and trades over time, are characterized. The asymptotic behavior of prices and trades as frictions (search costs and impatience) vanish, and the conditions for (non-)convergence to Walrasian prices, are explored. As a side product of independent interest, we present a self-contained theory of non-cooperative bargaining with two-sided, time-varying outside options.
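
As a purely illustrative complement, not the paper's non-cooperative bargaining protocol, the role of two-sided outside options can be seen in a static symmetric Nash bargaining split over a single unit: the surplus net of both outside options is shared equally whenever trade beats quitting for both parties.

```python
def nash_price(v_buyer, c_seller, out_buyer=0.0, out_seller=0.0):
    """Symmetric Nash bargaining price for one unit of a good.
    v_buyer: buyer's valuation; c_seller: seller's cost;
    out_buyer / out_seller: payoffs from quitting the match (outside options).
    Returns the agreed price, or None when quitting dominates for someone.
    Illustrative static split only, not the dynamic game studied in the paper."""
    net_surplus = (v_buyer - c_seller) - out_buyer - out_seller
    if net_surplus <= 0:
        return None                       # no price makes both prefer trading
    # Equal split of the net surplus: v - p - out_buyer = p - c - out_seller
    return (v_buyer - out_buyer + c_seller + out_seller) / 2.0

# A better outside option for the seller shifts the agreed price in her favour.
print(nash_price(1.0, 0.0))                   # 0.5
print(nash_price(1.0, 0.0, out_seller=0.4))   # 0.7
```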

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a theory of the joint allocation of formal control and cash-flow rights in venture capital deals. We argue that when the need for investor support calls for very high-powered outside claims, entrepreneurs should optimally retain formal control in order to avoid excessive interference. Hence, we predict that risky claims should be negatively correlated with control rights, both along the life of a start-up and across deals. This challenges the idea that risky claims should always be associated with more formal control, and is in line with contractual terms increasingly used in venture capital, in corporate venturing and in partnership deals between biotech start-ups and large drug companies. The paper provides a theoretical explanation for some puzzling evidence documented in Gompers (1997) and Kaplan and Stromberg (2000), namely the inclusion in venture capital contracts of contingencies that trigger both a reduction in VC control and the conversion of her preferred stock into common stock.

Relevance:

30.00%

Publisher:

Abstract:

We study the process by which subordinated regions of a country can obtain a more favourable political status. In our theoretical model, a dominant and a dominated region first interact through a voting process that can lead to different degrees of autonomy. If this process fails, both regions engage in a costly political conflict which can only lead either to the maintenance of the initial subordination of the region in question or to its complete independence. In the subgame-perfect equilibrium the voting process always leads to an intermediate arrangement acceptable to both parties. Hence, the costly political struggle never occurs. In contrast, in our experiments we observe a large amount of fighting involving high material losses, even in a case in which the possibilities for an arrangement without conflict are very salient. In our experimental environment intermediate solutions are feasible and stable, but purely emotional elements prevent them from being reached.

Relevance:

30.00%

Publisher:

Abstract:

Adverse selection may thwart trade between an informed seller, who knows the probability p that an item of antiquity is genuine, and an uninformed buyer, who does not know p. The buyer might not be wholly uninformed, however. Suppose he can perform a simple inspection, a test of his own: the probability that an item passes the test is g if the item is genuine, but only f < g if it is fake. Given that the buyer is no expert, his test may have little power: f may be close to g. Unfortunately, without much power, the buyer's test will not resolve the difficulty of adverse selection; gains from trade may remain unexploited. But now consider a "store", where the seller groups a number of items, perhaps all with the same quality, the same probability p of being genuine. (We show that in equilibrium the seller will choose to group items in this manner.) Now the buyer can conduct his test across a large sample, perhaps all, of a group of items in the seller's store. He can thereby assess the overall quality of these items; he can invert the aggregate of his test results to uncover the underlying p; he can form a "prior". There is thus no longer asymmetric information between seller and buyer: gains from trade can be exploited. This is our theory of retailing: by grouping items together - setting up a store - a seller is able to supply buyers with priors, as well as the items themselves. We show that the weaker the power of the buyer's test (the closer f is to g), the greater the seller's profit. So the seller has no incentive to assist the buyer, e.g., by performing her own tests on the items, or by cleaning them to reveal more about their true age. The paper ends with an analysis of which sellers should specialise in which qualities. We show that quality will be low in busy locations and high in expensive locations.
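
The aggregation argument can be made concrete with a short simulation (the parameter values are illustrative): when every item in a store shares the same quality p, the observed pass rate of the buyer's test converges to p·g + (1−p)·f, which the buyer can invert to recover p even when the test has little power, i.e. when f is close to g.

```python
import random

def estimate_quality(p, g, f, n_items, seed=0):
    """Simulate testing every item in a store of common quality p and invert
    the aggregate pass rate to recover p:
        pass_rate ~ p*g + (1 - p)*f   =>   p ~ (pass_rate - f) / (g - f)."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(n_items):
        genuine = rng.random() < p
        if rng.random() < (g if genuine else f):
            passes += 1
    pass_rate = passes / n_items
    p_hat = (pass_rate - f) / (g - f)
    return min(1.0, max(0.0, p_hat))       # clip the estimate to [0, 1]

# Even a weak test (f close to g) pins down p across a large enough sample.
print(round(estimate_quality(p=0.3, g=0.55, f=0.50, n_items=200_000), 2))
```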

Relevance:

30.00%

Publisher:

Abstract:

We present existence, uniqueness and continuous dependence results for some kinetic equations motivated by models for the collective behavior of large groups of individuals. Models of this kind have recently been proposed to study the behavior of large groups of animals, such as flocks of birds, swarms, or schools of fish. Our aim is to give a well-posedness theory for general models which possibly include a variety of effects: an interaction through a potential, such as a short-range repulsion and long-range attraction; a velocity-averaging effect where individuals try to adapt their own velocity to that of other individuals in their surroundings; and self-propulsion effects, which take into account effects on one individual that are independent of the others. We develop our theory in a space of measures, using mass transportation distances. As consequences of our theory, we also show the convergence of particle systems to their corresponding kinetic equations, and the local-in-time convergence to the hydrodynamic limit for one of the models.
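
A minimal individual-based sketch of the class of models covered, with an assumed Morse-type attraction-repulsion potential, a simple velocity-averaging term and illustrative parameters (not the paper's general measure-valued formulation), looks as follows; the kinetic equations arise in the limit of many particles.

```python
import numpy as np

def step(x, v, dt=0.01, Ca=1.0, la=2.0, Cr=2.0, lr=0.5, align=0.5):
    """One explicit Euler step of an illustrative individual-based model:
    pairwise Morse-type attraction/repulsion plus velocity averaging.
    x, v: (N, d) arrays of positions and velocities."""
    diff = x[:, None, :] - x[None, :, :]                    # x_i - x_j
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(x))   # keep the diagonal nonzero
    # U(r) = -Ca*exp(-r/la) + Cr*exp(-r/lr); dU holds U'(r) for every pair
    dU = Ca / la * np.exp(-dist / la) - Cr / lr * np.exp(-dist / lr)
    force = -(dU[:, :, None] * diff / dist[:, :, None]).sum(axis=1)
    # Velocity averaging: relax towards the mean velocity of the group
    alignment = align * (v.mean(axis=0) - v)
    v_new = v + dt * (force + alignment)
    return x + dt * v_new, v_new

rng = np.random.default_rng(0)
x, v = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
for _ in range(1000):
    x, v = step(x, v)
print(np.std(v, axis=0))   # velocity spread shrinks as the group aligns
```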

Relevance:

30.00%

Publisher:

Abstract:

Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever at any point of the game every player has full informational access to all choices that have been made so far. However, in the case of imperfect information some players are not fully informed about some choices.

Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept that is capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability.

Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states on which players base their decisions are explicitly expressible in an epistemic framework.

In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness. Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
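
The backward induction procedure referred to in Chapters 1 and 3 can be illustrated on a tiny perfect-information game tree; the sketch below uses a hypothetical two-player game (not an example from the thesis) and solves the tree by recursively replacing each decision node with the subtree preferred by the player moving there.

```python
def backward_induction(node):
    """Solve a finite perfect-information game tree by backward induction.
    A node is either a payoff tuple (terminal) or a dict of the form
    {"player": i, "moves": {label: subtree, ...}}.
    Returns (payoff profile, moves along the induced path)."""
    if isinstance(node, tuple):              # terminal node: payoffs (u0, u1, ...)
        return node, []
    player = node["player"]
    best = None
    for label, subtree in node["moves"].items():
        payoffs, path = backward_induction(subtree)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [label] + path)
    return best

# Hypothetical two-stage game: player 0 moves first, player 1 replies.
game = {"player": 0, "moves": {
    "L": {"player": 1, "moves": {"l": (2, 1), "r": (0, 0)}},
    "R": {"player": 1, "moves": {"l": (3, 0), "r": (1, 2)}},
}}
print(backward_induction(game))              # ((2, 1), ['L', 'l'])
```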

Relevance:

30.00%

Publisher:

Abstract:

An ab initio structure prediction approach adapted to the peptide-major histocompatibility complex (MHC) class I system is presented. Based on structure comparisons of a large set of peptide-MHC class I complexes, a molecular dynamics protocol is proposed using simulated annealing (SA) cycles to sample the conformational space of the peptide in its fixed MHC environment. A set of 14 peptide-human leukocyte antigen (HLA) A0201 and 27 peptide-non-HLA A0201 complexes for which X-ray structures are available is used to test the accuracy of the prediction method. For each complex, 1000 peptide conformers are obtained from the SA sampling. A graph theory clustering algorithm based on heavy atom root-mean-square deviation (RMSD) values is applied to the sampled conformers. The clusters are ranked using cluster size and mean effective or conformational free energies, with solvation free energies computed using Generalized Born MV 2 (GB-MV2) and Poisson-Boltzmann (PB) continuum models. The final conformation is chosen as the center of the best-ranked cluster. With conformational free energies, the overall prediction success is 83% using a 1.00 Angstrom crystal RMSD criterion for main-chain atoms, and 76% using a 1.50 Angstrom RMSD criterion for heavy atoms. The prediction success is even higher for the set of 14 peptide-HLA A0201 complexes: 100% of the peptides have main-chain RMSD values ≤ 1.00 Angstroms and 93% of the peptides have heavy atom RMSD values ≤ 1.50 Angstroms. This structure prediction method can be applied to complexes of natural or modified antigenic peptides in their MHC environment, with the aim of performing rational structure-based optimizations of tumor vaccines.
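
The clustering step can be sketched as follows; the RMSD cutoff, the lack of structural superposition and the use of connected components are simplifying assumptions for illustration, not the article's exact graph-theoretic clustering or its energy-based ranking. Conformers whose pairwise RMSD falls below the cutoff are linked, each connected component forms a cluster, and the cluster center is the conformer with the lowest mean RMSD to the other members.

```python
import numpy as np

def rmsd(a, b):
    """Heavy-atom RMSD between two conformers given as (n_atoms, 3) arrays
    with matching atom order (no superposition, for simplicity)."""
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

def cluster_conformers(conformers, cutoff=1.5):
    """Link conformers whose pairwise RMSD is below `cutoff` (Angstroms) and
    return the connected components, largest first, each with its center."""
    n = len(conformers)
    d = np.array([[rmsd(a, b) for b in conformers] for a in conformers])
    unvisited, clusters = set(range(n)), []
    while unvisited:
        stack, members = [unvisited.pop()], []
        while stack:                          # depth-first search over the RMSD graph
            i = stack.pop()
            members.append(i)
            neighbours = {j for j in unvisited if d[i, j] < cutoff}
            unvisited -= neighbours
            stack.extend(neighbours)
        center = min(members, key=lambda k: d[k, members].mean())
        clusters.append((members, center))
    return sorted(clusters, key=lambda c: len(c[0]), reverse=True)

# Toy usage with random 9-atom "conformers" standing in for sampled peptides.
confs = [np.random.default_rng(i).normal(scale=2.0, size=(9, 3)) for i in range(100)]
print([len(members) for members, _ in cluster_conformers(confs)][:5])
```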

Relevance:

30.00%

Publisher:

Abstract:

A new algorithm for the diagonalization of matrices with a dominant diagonal is presented. Its effectiveness is demonstrated in the treatment of non-symmetric matrices with elements defined over the complex field, even for matrices of large dimensions. The simplicity of the method is highlighted, as well as the ease of implementing it as program code. Its advantages and limiting characteristics are discussed, together with some of the improvements that can be made. Finally, some numerical examples are shown.
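
The abstract does not spell out the algorithm itself, so the sketch below only illustrates the setting rather than the method: for a strongly diagonally dominant complex matrix, Gershgorin's theorem already confines each eigenvalue to a small disc around a diagonal entry, which is why such matrices are natural targets for simple iterative diagonalization schemes.

```python
import numpy as np

def is_diagonally_dominant(A):
    """Check strict row diagonal dominance of a square complex matrix."""
    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # Gershgorin radii
    return bool(np.all(np.abs(np.diag(A)) > radii))

rng = np.random.default_rng(0)
n = 6
A = 0.1 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))  # small off-diagonal part
A += np.diag(10.0 * np.arange(1, n + 1).astype(complex))            # dominant diagonal

print(is_diagonally_dominant(A))                 # True
# Each eigenvalue lies within its Gershgorin radius of a diagonal entry,
# so the diagonal is already a good first approximation to the spectrum.
eigenvalues = np.sort_complex(np.linalg.eigvals(A))
print(np.max(np.abs(eigenvalues - np.diag(A))))  # small compared with the radii
```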

Relevance:

30.00%

Publisher:

Abstract:

We study the minimum mean square error (MMSE) and the multiuser efficiency η of large dynamic multiple-access communication systems in which optimal multiuser detection is performed at the receiver, as the number and the identities of active users are allowed to change at each transmission time. The system dynamics are ruled by a Markov model describing the evolution of the channel occupancy, and a large-system analysis is performed as the number of observations grows large. Starting from the equivalent scalar channel and the fixed-point equation tying multiuser efficiency and MMSE, we extend it to the case of a dynamic channel, and derive lower and upper bounds for the MMSE (and, thus, for η as well) holding true in the limit of large signal-to-noise ratios and increasingly large observation time T.
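
For the static, equal-power special case, the fixed-point relation referred to above can be iterated numerically. The sketch below assumes Gaussian inputs, for which the scalar channel has mmse(s) = 1/(1+s), and the standard large-system form η = 1 / (1 + β·γ·mmse(η·γ)); this is an illustrative simplification, not the paper's dynamic Markov-channel analysis.

```python
def multiuser_efficiency(beta, snr, tol=1e-12, max_iter=10_000):
    """Iterate the large-system fixed point
        eta = 1 / (1 + beta * snr * mmse(eta * snr))
    for equal-power users with Gaussian inputs, where mmse(s) = 1/(1+s).
    beta: system load (users per dimension); snr: per-user SNR (linear scale)."""
    mmse = lambda s: 1.0 / (1.0 + s)
    eta = 1.0
    for _ in range(max_iter):
        eta_next = 1.0 / (1.0 + beta * snr * mmse(eta * snr))
        if abs(eta_next - eta) < tol:
            break
        eta = eta_next
    return eta

# Multiuser efficiency (and hence the per-user MMSE) degrades with the load.
for beta in (0.25, 0.5, 1.0, 2.0):
    eta = multiuser_efficiency(beta, snr=10.0)
    print(beta, round(eta, 4), round(1.0 / (1.0 + eta * 10.0), 4))
```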

Relevance:

30.00%

Publisher:

Abstract:

We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
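
In the classical additive, weak-selection setting, result (ii) reduces to the familiar sign condition of Hamilton's rule; the sketch below is only a numeric illustration of that special case, with illustrative cost, benefit and relatedness values, not the paper's general derivation.

```python
def selection_gradient(benefit, cost, relatedness):
    """Inclusive-fitness effect r*b - c of an additive social trait under weak
    selection; its sign gives the direction of allele-frequency change."""
    return relatedness * benefit - cost

def delta_p(p, benefit, cost, relatedness, intensity=0.01):
    """One-generation change in mean allele frequency, proportional to the
    genetic variance p*(1 - p) times the phenotypic selection gradient."""
    return intensity * p * (1.0 - p) * selection_gradient(benefit, cost, relatedness)

# Helping spreads whenever r*b > c, whatever the current allele frequency p.
for p in (0.1, 0.5, 0.9):
    print(p, delta_p(p, benefit=2.0, cost=0.5, relatedness=0.5))
```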

Relevance:

30.00%

Publisher:

Abstract:

This paper advances a highly tractable model with search-theoretic foundations for money and neoclassical growth. In the model, manufacturing and commerce are distinct and separate activities. In manufacturing, goods are efficiently produced combining capital and labor. In commerce, goods are exchanged in bilateral meetings. The model is applied to study the effects of inflation on capital accumulation and welfare. With realistic parameters, inflation has large negative effects on welfare even though it raises capital and output. In contrast, with cash-in-advance, a device informally motivated with bilateral trading, inflation depresses capital and output and has a negligible effect on welfare.