841 results for Sharing rules
Abstract:
In this paper we give a generalization of the serial cost-sharing rule defined by Moulin and Shenker (1992) for cost sharing problems. Under the serial cost-sharing rule, agents with low demands for a good pay the cost increments associated with low quantities in the production process of that good. This may not always be desirable for those agents, since those increments can be higher than others, for example with concave cost functions. In this paper we give a family of cost sharing rules that allocates cost increments to all possible positions in the production process, and we characterize each rule in the family axiomatically, in a manner related to the characterization given for the serial cost-sharing rule by Moulin and Shenker (1994).
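For reference, the serial cost-sharing rule being generalized admits a short computation: sort the demands, form the intermediate quantities s_k = q_1 + ... + q_{k-1} + (n - k + 1) q_k, and split the k-th cost increment equally among the agents whose demand is at least the k-th smallest. The sketch below implements this standard rule only, not the generalized family proposed in the paper; function and variable names are illustrative.

```python
def serial_cost_shares(C, demands):
    """Serial cost shares (Moulin-Shenker, 1992), illustrative implementation.

    With demands sorted q_1 <= ... <= q_n, the k-th cost increment
    C(s_k) - C(s_{k-1}), where s_k = q_1 + ... + q_{k-1} + (n - k + 1) * q_k,
    is split equally among the n - k + 1 agents whose demand is at least q_k.
    """
    n = len(demands)
    order = sorted(range(n), key=lambda i: demands[i])  # agents by increasing demand
    q = [demands[i] for i in order]
    shares = [0.0] * n
    prefix, s_prev = 0.0, 0.0
    for k in range(1, n + 1):
        s_k = prefix + (n - k + 1) * q[k - 1]
        piece = (C(s_k) - C(s_prev)) / (n - k + 1)
        for j in range(k - 1, n):            # agents with the k-th smallest demand or larger
            shares[order[j]] += piece
        prefix += q[k - 1]
        s_prev = s_k
    return shares


# Example with a convex cost: shares sum to C(1 + 2 + 3) = 36.
print(serial_cost_shares(lambda x: x ** 2, [1.0, 2.0, 3.0]))  # [3.0, 11.0, 22.0]
```

With a convex cost such as C(x) = x^2 the low-demand agent pays only the cheap early increments; with a concave cost the same mechanics make it pay relatively expensive ones, which is the drawback the paper's family of rules addresses.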
Abstract:
We reconsider the discrete version of the axiomatic cost-sharing model. We propose a condition of (informational) coherence requiring that not all informational refinements of a given problem be solved differently from the original problem. We prove that strictly coherent linear cost-sharing rules must be simple random-order rules.
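As a point of reference for the result, an incremental rule for a fixed arrival order charges each agent the extra cost of adding its demand, and a random-order rule is a probability-weighted average of such incremental rules over arrival orders. The sketch below illustrates only this general construction; the paper's "simple" random-order rules are a specific subclass, and its model is discrete (integer demands), which the sketch glosses over.

```python
from itertools import permutations

def incremental_shares(C, demands, order):
    """Incremental rule for one arrival order: each agent pays the extra cost
    of adding its full demand on top of the demands served before it."""
    shares = [0.0] * len(demands)
    served = 0.0
    for i in order:
        shares[i] = C(served + demands[i]) - C(served)
        served += demands[i]
    return shares

def random_order_rule(C, demands, weights):
    """A random-order rule: a probability-weighted average of incremental
    rules over arrival orders (uniform weights give a Shapley-like rule)."""
    shares = [0.0] * len(demands)
    for order, w in weights.items():
        shares = [s + w * x for s, x in zip(shares, incremental_shares(C, demands, order))]
    return shares

# Two agents, concave cost, uniform weights over both arrival orders.
C = lambda x: x ** 0.5
uniform = {order: 0.5 for order in permutations(range(2))}
print(random_order_rule(C, [1.0, 3.0], uniform))  # shares sum to C(4) = 2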
Abstract:
We consider a network in which several service providers offer wireless access to their respective subscribed customers through potentially multihop routes. If providers cooperate by jointly deploying and pooling their resources, such as spectrum and infrastructure (e.g., base stations), and agree to serve each other's customers, their aggregate payoffs and individual shares may substantially increase through opportunistic utilization of resources. The potential of such cooperation can, however, be realized only if each provider intelligently determines with whom it would cooperate, when it would cooperate, and how it would deploy and share its resources during such cooperation. Also, developing a rational basis for sharing the aggregate payoffs is imperative for the stability of the coalitions. We model such cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings, i.e., if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff while offering each a share that removes any incentive to split from the coalition. The optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time by respectively solving the primals and the duals of the above optimizations, using distributed computations and limited exchange of confidential information among the providers. Numerical evaluations reveal that cooperation substantially enhances individual providers' payoffs under the optimal cooperation strategy and several different payoff sharing rules.
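The stability notion used here is the core of a transferable-payoff coalitional game: the grand coalition's payoff is split so that no subset of providers could earn more on its own. A minimal sketch of that check follows; the three-provider value function is hypothetical and stands in for the paper's optimization-based payoffs.

```python
from itertools import combinations

def in_core(value, payoffs):
    """Check whether a payoff split lies in the core of a transferable-payoff
    coalitional game: the grand coalition's value is fully distributed and no
    sub-coalition could earn more by splitting off."""
    n = len(payoffs)
    if abs(sum(payoffs) - value(tuple(range(n)))) > 1e-9:
        return False                              # not efficient
    for size in range(1, n):
        for S in combinations(range(n), size):
            if sum(payoffs[i] for i in S) < value(S) - 1e-9:
                return False                      # coalition S would rather split off
    return True

# Hypothetical 3-provider game in which pooling resources is superadditive.
v = {(): 0, (0,): 4, (1,): 4, (2,): 2,
     (0, 1): 10, (0, 2): 8, (1, 2): 7, (0, 1, 2): 14}
print(in_core(lambda S: v[tuple(sorted(S))], [5.5, 5.0, 3.5]))  # True: no incentive to split
```

In the paper's setting, core payoff shares are not found by enumeration but come out of the duals of the cooperation optimizations, which is what makes the computation polynomial-time.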
Abstract:
This paper outlines a novel information sharing method using Binary Decision Diagrams (BDDs). It is inspired by the work of Al-Shaer and Hamed, who applied BDDs to the modelling of network firewalls. The approach is applied to an information sharing policy system that optimizes the search for redundancy, shadowing, generalisation and correlation within information sharing rules.
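To make two of the rule anomalies concrete: a rule is shadowed when earlier rules already cover everything it matches but prescribe a different action, and redundant when they cover it with the same action. The sketch below checks this by enumerating a toy attribute domain; a BDD-based system such as the one described would encode the match sets symbolically instead. All attribute and rule names are invented for illustration.

```python
from itertools import product

# Toy attribute domain; a BDD-based system would encode these match sets
# symbolically instead of enumerating them.
DOMAIN = list(product(["low", "high"], ["internal", "external"]))  # (sensitivity, partner)

def match_set(cond):
    """All (sensitivity, partner) pairs matched by a rule condition."""
    return {(s, p) for (s, p) in DOMAIN
            if s in cond.get("sensitivity", {"low", "high"})
            and p in cond.get("partner", {"internal", "external"})}

def analyse(rules):
    """Flag rules fully covered by earlier rules: 'redundant' if every earlier
    covering rule takes the same action, 'shadowed' otherwise."""
    seen = []                                   # (match set, action) of earlier rules
    for idx, (cond, action) in enumerate(rules):
        m = match_set(cond)
        covered = set().union(*(s for s, _ in seen)) if seen else set()
        if m and m <= covered:
            same_action = all(a == action for s, a in seen if s & m)
            print(f"rule {idx} is", "redundant" if same_action else "shadowed")
        seen.append((m, action))

rules = [
    ({"sensitivity": {"high"}}, "deny"),
    ({"sensitivity": {"high"}, "partner": {"external"}}, "share"),  # shadowed by rule 0
]
analyse(rules)
```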
Abstract:
This paper studies cost-sharing rules under dynamic adverse selection. We present a typical principal-agent model with two periods, set up in Laffont and Tirole's (1986) canonical regulation environment. In the first period, when the contract is signed, the firm faces prior uncertainty about its efficiency parameter. In the second period, the firm learns its efficiency and chooses the level of cost-reducing effort. The optimal mechanism sequentially screens the firm's types and achieves a higher level of welfare than its static counterpart. The contract is indirectly implemented by a sequence of transfers, consisting of a fixed advance payment based on the reported cost estimate and an ex-post compensation linear in cost performance.
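Schematically, and with generic notation not taken from the paper, such an indirect implementation pays a fixed advance based on the cost report plus a term linear in realized cost performance:

$$ t(\hat{\beta}, C) \;=\; a(\hat{\beta}) \;+\; b(\hat{\beta})\,\bigl(\bar{C}(\hat{\beta}) - C\bigr), $$

where $\hat{\beta}$ is the reported cost estimate, $a(\hat{\beta})$ the advance payment, $C$ the realized cost, $\bar{C}(\hat{\beta})$ a cost target implied by the report, and $b(\hat{\beta})$ the linear sharing coefficient.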
Abstract:
This paper empirically examines the impact of sharing rules of origin (RoOs) with other ASEAN+1 free trade agreements (FTAs) on ASEAN-Korea FTA/ASEAN-China FTA utilization in Thai exports in 2011. Our careful empirical analysis suggests that the harmonization of RoOs across FTAs plays some role in reducing the costs arising from the spaghetti bowl phenomenon. In particular, harmonization to "change-in-tariff classification (CTC) or real value-added content (RVC)" plays a relatively positive role, in that it does not seriously discourage firms' use of multiple FTA schemes. On the other hand, harmonization to CTC or CTC&RVC hinders firms from using those schemes.
Abstract:
In order to study the failure of disordered materials, the ensemble evolution of a nonlinear chain model was examined using a stochastic slice sampling method. The following results were obtained. (1) Sample-specific behavior, i.e. evolutions that differ from sample to sample in some cases under the same macroscopic conditions, is observed for various load-sharing rules, except under the global mean-field rule. The evolution under the cluster load-sharing rule, which reflects the interaction between broken clusters, is the most complicated and cannot be predicted from the initial damage pattern by any simple criterion. (2) A binary failure probability, its transitional region, where globally stable (GS) modes and evolution-induced catastrophic (EIC) modes coexist, and the corresponding scaling laws are fundamental to the failure. There is a sensitive zone in the vicinity of the boundary between the GS and EIC regions in phase space, where a slight stochastic increment in damage can trigger a radical transition from GS to EIC. (3) The distribution of strength is obtained from the binary failure probability. This, like sample-specificity, originates from a trans-scale sensitivity linking mesoscopic and macroscopic phenomena. (4) Strong fluctuations in the stress distribution, different from those of GS modes, may serve as a precursor of evolution-induced catastrophe (EIC).
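For intuition about the GS/EIC dichotomy, the sketch below runs a quasi-static failure evolution of a chain under the simplest (global mean-field) load-sharing rule: a fixed total load is repeatedly re-shared equally among intact elements until either the damage stops (a globally stable mode) or everything breaks (an evolution-induced catastrophe). The threshold distribution and load level are chosen near the GS/EIC boundary, where outcomes may differ from sample to sample; this is an illustration, not the paper's chain model or its stochastic slice sampling method.

```python
import random

def evolve_mean_field(thresholds, total_load):
    """Quasi-static failure evolution of a chain under the global mean-field
    (equal) load-sharing rule: the fixed total load is shared equally among
    intact elements, and an element breaks once its stress reaches its
    threshold.  Returns the number of surviving elements (0 means an
    evolution-induced catastrophe)."""
    intact = list(thresholds)
    while intact:
        stress = total_load / len(intact)
        survivors = [t for t in intact if t > stress]
        if len(survivors) == len(intact):
            return len(intact)                    # globally stable mode reached
        intact = survivors
    return 0

# Load chosen near the GS/EIC boundary for this threshold distribution, so the
# outcome can vary from sample to sample under identical macroscopic conditions.
random.seed(0)
for _ in range(5):
    sample = [random.uniform(0.5, 1.5) for _ in range(1000)]
    print(evolve_mean_field(sample, total_load=562.0))
```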
Abstract:
In some circumstances, group actions perform better than individual actions. In such situations it is preferable to form coalitions. These coalitions may be disjoint or nested. The economic literature places strong emphasis on modelling agreements in which coalitions of economic agents are disjoint sets. Yet we observe in everyday life that political, environmental, free-trade and informal insurance coalitions are most often nested. It therefore becomes imperative to understand how nested coalitions work economically. My thesis develops an analytical framework for understanding the formation and performance of coalitions even when they are nested. In the first chapter I develop a bargaining game that allows the formation of nested coalitions. I show that this game admits an equilibrium and I develop an algorithm to compute equilibrium allocations for symmetric games. I show that any network structure can be decomposed in a unique way into a structure of nested coalitions, and that under certain conditions this structure corresponds to an equilibrium structure of an underlying game. In the second chapter I introduce a new notion of the core for the case where nested coalitions are allowed. I show that this core notion is a natural generalization of the coalition-structure core. Going further, I introduce more sophisticated agents and obtain the nested-coalition-structure core, which I show to be a refinement of the first notion. In the remainder of the thesis I apply the theories developed in the first two chapters to concrete cases. The third chapter is an application of the one-to-one relationship, established in the first chapter, between coalition formation and network formation. I propose a realistic and effective model of informal insurance, introducing four major innovations into the economic literature on informal insurance: a merger of the group approach and the social-network approach, the possibility of nested informal insurance organizations, an endogenous punishment scheme, and externalities. I characterize stable informal insurance agreements and isolate the conditions that push agents to deviate. The literature assumes that only high-income individuals can afford to violate informal insurance agreements; I give the conditions under which this assumption holds, but I also show that it can fail under other realistic conditions. Finally, I derive comparative statics results under two different sharing norms. In the fourth and final chapter, I propose a model of informal insurance in which homogeneous groups are built on pre-existing relationships of trust. These groups are nested and represent risk-sharing sets. This approach is more general than the traditional group or network approaches. I characterize stable agreements without making assumptions on the discount rate, and I identify the characteristics of the stable networks that correspond to the lowest discount rates.
Although the purpose of informal insurance is to smooth consumption, I show that external effects, linked in particular to the value placed on interpersonal ties, reinforce stability. I develop a finite-step algorithm that equalizes consumption for all linked individuals. Because the number of steps is finite (unlike the existing infinite-step algorithms), this algorithm can realistically inform economic policy. Finally, I give comparative statics results for certain exogenous values of the model.
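The end state targeted by such an equalization procedure can be illustrated very simply: if transfers were unrestricted along insurance links, every connected component of the trust network would end up at its mean income. The sketch below shows only that simplified target outcome; it is not the thesis's finite-step algorithm over nested risk-sharing groups, and all names are illustrative.

```python
def equalize_consumption(incomes, links):
    """Equalize consumption among individuals connected by insurance links.

    Deliberately simplified: transfers are assumed unrestricted along links,
    so every connected component ends up at its mean income.  This only
    illustrates the target outcome, not the thesis's finite-step algorithm
    over nested risk-sharing groups."""
    n = len(incomes)
    parent = list(range(n))

    def find(i):                       # union-find to get connected components
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in links:
        parent[find(a)] = find(b)
    components = {}
    for i in range(n):
        components.setdefault(find(i), []).append(i)
    consumption = list(incomes)
    for members in components.values():
        mean = sum(incomes[i] for i in members) / len(members)
        for i in members:
            consumption[i] = mean
    return consumption

print(equalize_consumption([10, 2, 6, 9], links=[(0, 1), (1, 2)]))  # [6.0, 6.0, 6.0, 9]
```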
Abstract:
The purpose of this study is to determine who pays for the risk of pollution and, consequently, to verify whether the polluter-pays principle is actually implemented in the field of environmental risk management. It examines the degree to which risk management is mutualized under various specific pieces of legislation. The payers fall a priori into four categories: the persons whose activity contributes to the pollution risk, the insurance companies that agree to insure those persons, public bodies or authorities, and third parties. Various examples drawn from Belgian or European legislation are examined in order to determine whether they conform to the letter and/or the spirit of the polluter-pays principle, notably civil liability, environmental liability, waste management and the market for greenhouse gas emission allowances. Liability mechanisms, which intervene after the damage has occurred and require proof of a causal link, do not always fully ensure the preventive function of the polluter-pays principle, and they are not adequate instruments for managing diffuse or chronic pollution. As a result, techniques for mutualizing the management of environmental risk have developed. Is recourse to these mutualization techniques (insurance, public funds financed by environmental taxation, or emission-allowance markets) consistent with the polluter-pays principle, and does it make it possible to achieve the objective of a high level of environmental protection? Is the deterrent effect of the polluter-pays principle not weakened by mutualization? The article shows that the definition of the polluter-pays principle by the Court of Justice of the European Union is centred on the contribution to the pollution risk, which makes it possible to use risk-mutualization techniques while complying with the Treaty on the Functioning of the European Union.
Abstract:
Recent research suggests that aggressive driving may be influenced by driver perceptions of their interactions with other drivers in terms of 'right' or 'wrong' behaviour. Drivers appear to take a moral standpoint on 'right' or 'wrong' driving behaviour. However, 'right' or 'wrong' in the context of road use is not defined solely by legislation, but includes informal rules that are sometimes termed 'driving etiquette'. Driving etiquette has implications for road safety and public safety, since breaches of both formal and informal rules may result in moral judgement of others and subsequent behaviours designed to punish the 'offender' or 'teach them a lesson'. This paper outlines qualitative research that was undertaken with drivers to explore their understanding of driving etiquette and how they reacted to other drivers' observance or violation of that understanding. The aim was to develop an explanatory framework within which the relationships between driving etiquette and aggressive driving could be understood, specifically moral judgement of other drivers and punishment of their transgressions of driving etiquette. Thematic analysis of focus groups (n=10) generated three main themes: (1) courtesy and reciprocity, and the notion of two-way responsibility, with examples of how expectations of courteous behaviour vary according to the traffic interaction; (2) acknowledgement and shared social experience: 'giving the wave'; and (3) responses to breaches of the expectations/informal rules. The themes are discussed in terms of their roles in an explanatory framework of the informal rules of etiquette and how interactions between drivers can reinforce or weaken a driver's understanding of driving etiquette and potentially lead to driving aggression.
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, which is formalized by the concept of Nash equilibrium, e.g., Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows, by exhibiting a specific 'worst-case' welfare function that requires GWSV rules to be used, that the only budget-balanced distribution rules guaranteeing equilibrium existence in all welfare-sharing games are generalized weighted Shapley values (GWSVs). Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
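For concreteness, the two families being connected have simple unweighted definitions at a single resource: the Shapley-value rule gives each agent its marginal contribution to the local welfare averaged over arrival orders (budget-balanced), while the marginal-contribution rule gives W(S) - W(S without i) (generally not budget-balanced). A minimal sketch of both follows; the local welfare function is a hypothetical example, and the generalized weighted variants (GWSV/GWMC) discussed above additionally involve agent weights and priority orderings.

```python
from itertools import combinations
from math import factorial

def shapley_shares(agents, W):
    """Shapley-value distribution rule at one resource: each agent receives its
    marginal contribution to the local welfare W, averaged over all arrival
    orders.  Budget-balanced: the shares sum to W(all agents)."""
    n = len(agents)
    shares = {}
    for i in agents:
        others = [a for a in agents if a != i]
        total = 0.0
        for k in range(len(others) + 1):
            for T in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (W(set(T) | {i}) - W(set(T)))
        shares[i] = total
    return shares

def marginal_shares(agents, W):
    """Marginal-contribution rule: W(S) - W(S without i); not budget-balanced."""
    S = set(agents)
    return {i: W(S) - W(S - {i}) for i in agents}

# Hypothetical anonymous local welfare with decreasing returns in |S|.
W = lambda S: [0, 6, 10, 12][len(S)]
print(shapley_shares(["a", "b", "c"], W))    # {'a': 4.0, 'b': 4.0, 'c': 4.0}
print(marginal_shares(["a", "b", "c"], W))   # {'a': 2, 'b': 2, 'c': 2}
```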
Abstract:
Young novice drivers are at considerable risk of injury on the road. Their behaviour appears vulnerable to the social influence of their parents and friends. The nature and mechanisms of parent and peer influence on young novice driver (16–25 years) behaviour were explored via small group interviews (n = 21) and two surveys (n1 = 1170, n2 = 390) to inform more effective young driver countermeasures. Parental and peer influence occurred in the pre-Licence, Learner, and Provisional (intermediate) periods. Pre-Licence and unsupervised Learner drivers reported their parents were less likely to punish risky driving (e.g., speeding). These drivers were more likely to imitate their parents and reported that their parents were also risky drivers. Young novice drivers who experienced or expected social punishments from peers, including being 'told off' for risky driving, reported less riskiness. Conversely, drivers who experienced or expected social rewards such as being 'cheered on' by friends – who were also more risky drivers – reported more risky driving, including crashes and offences. Interventions enhancing positive influence and curtailing negative influence may improve road safety outcomes not only for young novice drivers, but for all persons who share the road with them. Parent-specific interventions warrant further development and evaluation, including: modelling of safe driving behaviour by parents; active monitoring of driving during novice licensure; and sharing the family vehicle during the intermediate phase. Peer-targeted interventions, including modelling of safe driving behaviour and attitudes, minimisation of social reinforcement, and promotion of social sanctions for risky driving, also need further development and evaluation.
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve distributing a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end-users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end users, and fairness of the protocol for end users. On the one hand, this multifaceted analysis allows us to select the most suitable protocols from a set of available ones based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new optimality criteria that we should take into account and new network properties that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, in which an optimization is applied to a general-form backoff function. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme that achieves fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the first we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlays and peer-to-peer networks, where the resource is not placed on a common station but is spread across the network. The theoretical analysis suggests what behavior will be selected by an end station under such a reputation mechanism.
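As one concrete point in the design space studied here, the sketch below shows the textbook truncated exponential backoff with random jitter, i.e., a single fixed choice of backoff function. The thesis's contribution is the optimization over general backoff functions and the fairness and reputation extensions, which this illustration does not attempt; names and parameters are illustrative.

```python
import random

def backoff_delay(attempt, base=1.0, factor=2.0, cap=1024.0):
    """Truncated exponential backoff with random jitter: after `attempt`
    collisions, pick a uniformly random wait inside a contention window that
    doubles with each collision, up to a cap."""
    window = min(cap, base * factor ** attempt)
    return random.uniform(0.0, window)

def transmit(succeeds, max_attempts=8):
    """Retry a transmission, backing off after every collision.  Returns the
    number of collisions before success (or None) and the total time deferred."""
    waited = 0.0
    for attempt in range(max_attempts):
        if succeeds():
            return attempt, waited
        waited += backoff_delay(attempt)      # a real MAC would defer for this long
    return None, waited

random.seed(1)
print(transmit(lambda: random.random() < 0.3))  # (collisions before success, total deferred time)
```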