40 results for "display rules"
Abstract:
General Introduction

This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South trade agreements, whether reciprocal (like the Europe Agreements (EAs) or NAFTA) or not (such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are widely regarded as a primary cause of the under-utilization of the improved market access granted by PTAs. RoOs are the rules that determine whether goods are eligible for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special-interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection, and hence restrict market access, to a statistically significant and quantitatively large extent.

Part I

In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Céline Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows.
First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at that level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs.

The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward greater openness and hence make it more compatible with the multilateral trading system. The chapter focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: a Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff; converting all instruments into an MFC would therefore improve transparency much like the "tariffication" of NTBs. The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to obtain a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs.
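The estimation and inversion steps just described, together with the trade-weighted aggregation of step 3, can be sketched as follows. Everything in this sketch is illustrative: the logistic link and all coefficients are invented for the example, not the chapter's actual estimates.

```python
import math

# Hypothetical illustration of the "tariffication" exercise. Step 1 is assumed
# to have produced a logistic link between the utilization rate u, the tariff
# preference margin p, and a Maximum Foreign Content mfc (coefficients a, b, c
# are made up for this sketch).

def utilization(pref_margin, mfc, a=-2.0, b=8.0, c=4.0):
    """Logistic utilization rate: rises with the preference margin and with a
    more permissive (higher) MFC."""
    z = a + b * pref_margin + c * mfc
    return 1.0 / (1.0 + math.exp(-z))

def equivalent_mfc(pref_margin, observed_util, a=-2.0, b=8.0, c=4.0):
    """Step 2: invert the link to recover, line by line, the MFC that
    reproduces the observed utilization rate under the old RoO."""
    logit = math.log(observed_util / (1.0 - observed_util))
    return (logit - a - b * pref_margin) / c

def trade_weighted_average(mfcs, trade_values):
    """Step 3: aggregate line-level simulated MFCs with trade weights."""
    total = sum(trade_values)
    return sum(m * w for m, w in zip(mfcs, trade_values)) / total

# round-trip check on one tariff line: inverting the link recovers the MFC
u = utilization(0.05, 0.25)
assert abs(equivalent_mfc(0.05, u) - 0.25) < 1e-9

avg_mfc = trade_weighted_average([0.20, 0.30, 0.25], [1.0, 3.0, 2.0])
```

The round-trip assertion checks that inverting the fitted link recovers the MFC that generated a given utilization rate, which is the core of the simulation.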
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to obtain an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier: only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of good value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests.

The third chapter of the thesis considers whether the EU's Europe Agreements, with their current sets of RoOs, could be a model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs, used before 1997, and the "single list" RoOs, used since 1997. Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU's RoOs.
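The chapter-3 setup, in which exporters split sales between the EU and the rest of the world by comparing producer prices on each market, can be sketched with a CET-style supply share. The functional form, the elasticity value, and the 8% compliance cost below are illustrative assumptions, not the thesis estimates.

```python
# Sketch of a CET allocation: an exporter splits output between the EU and
# the rest of the world according to relative producer prices; RoO compliance
# costs shave the effective EU price. All numbers are invented.

def eu_sales_share(p_eu, p_row, sigma, compliance_cost=0.0):
    """Share of output sold in the EU under a CET supply response with
    transformation elasticity sigma; compliance_cost is an ad-valorem RoO
    compliance cost that reduces the effective EU producer price."""
    effective_p_eu = p_eu * (1.0 - compliance_cost)
    ratio = (effective_p_eu / p_row) ** sigma  # relative supply response
    return ratio / (1.0 + ratio)

base = eu_sales_share(1.10, 1.00, sigma=4.0)                        # preference only
with_roo = eu_sales_share(1.10, 1.00, sigma=4.0, compliance_cost=0.08)
```

The comparison illustrates the chapter's headline result in miniature: the RoO compliance cost lowers the effective EU producer price, so `with_roo < base` and part of the market access conferred by the preference is undone.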
The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.

Part II

The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument that has the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years after their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases.
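The one-for-one benchmark behind the count-data test can be illustrated with a stylized simulation: if only a fraction of measures is revoked on the five-year cycle, the slope of revocations on initiations lagged five years recovers that fraction rather than one. The initiation counts and the 60% compliance rate below are invented for the illustration.

```python
# Stylized five-year-cycle benchmark: under a strict sunset review,
# revocations in year t equal initiations in year t-5, so the regression
# slope would be one. With partial compliance the slope falls below one,
# which is the pattern the chapter reports.

initiations = [30, 45, 38, 52, 41, 47, 35, 50, 44, 39, 48, 42]
compliance = 0.6  # fraction of measures actually revoked after five years

lagged = initiations[:-5]                        # initiations five years earlier
revocations = [round(compliance * x) for x in lagged]

# OLS slope through the origin: a value near 1 would mean a one-for-one
# initiation-revocation cycle
slope = sum(x * y for x, y in zip(lagged, revocations)) / sum(x * x for x in lagged)
```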
Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting de jure compliance. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the product sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than of other measures.
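The survival comparison can be sketched with a minimal Kaplan-Meier estimator: a post-agreement curve lying below the pre-agreement one means measures are revoked sooner. The durations are invented, and censoring is ignored for brevity, unlike in the chapter's actual analysis.

```python
# Minimal Kaplan-Meier sketch of the pre/post-agreement survival comparison.
# Durations are in years and entirely invented for illustration.

def kaplan_meier(durations):
    """Return the survival function S(t) at each observed duration
    (no censoring handled, for brevity)."""
    at_risk = len(durations)
    surv, s = {}, 1.0
    for t in sorted(set(durations)):
        deaths = durations.count(t)
        s *= 1.0 - deaths / at_risk
        surv[t] = s
        at_risk -= deaths
    return surv

pre_agreement = [5, 6, 7, 8, 8, 9, 10, 12]   # longer measure lifetimes
post_agreement = [3, 4, 5, 5, 5, 6, 6, 7]    # shorter measure lifetimes

s_pre = kaplan_meier(pre_agreement)
s_post = kaplan_meier(post_agreement)
# at t = 6 years the post-agreement curve lies below the pre-agreement one
```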
Abstract:
Abstract

The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly prescribes the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), it has become apparent in the last few years that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs.
Among other things, they usually display broad degree distributions and a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast to random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflict situations in economics and sociology are well described neither by a fixed geographical arrangement of individuals on a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks can help explain why one finds higher levels of cooperation in populations of human beings or animals than classical game theory prescribes. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing, using diverse statistical measures, how it differs from biological or technological networks. Furthermore, I extract and describe its community structure, taking into account the intensity of each collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an "effective" view of it as opposed to a historical one.
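The clustering coefficient mentioned above is straightforward to compute directly. A minimal sketch on an invented toy graph:

```python
from itertools import combinations

# Local clustering coefficient: the fraction of a node's neighbour pairs
# that are themselves connected. The adjacency dict below is a toy example
# (a triangle 0-1-2 with a pendant node 3), invented for illustration.

def clustering(adj, v):
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
avg_clustering = sum(clustering(adj, v) for v in adj) / len(adj)
```

A high average clustering combined with short path lengths is the usual small-world signature; a random graph of the same density would typically show much lower clustering.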
Thereafter, I combine evolutionary game theory with several network models, as well as with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining it. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics used and on how individual payoffs are calculated.
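The kind of mechanism described in this abstract, cooperators surviving by clustering together on a network, can be sketched with a toy spatial Prisoner's Dilemma. The ring graph, payoff values and imitate-the-best update rule are illustrative choices, not the thesis's actual models.

```python
# Toy evolutionary game on a network: cooperators (1) and defectors (0) play
# a Prisoner's Dilemma with their neighbours, then each player imitates the
# best-scoring player in its neighbourhood (including itself). All numbers
# are invented for illustration.

T, R, P, S = 1.8, 1.0, 0.1, 0.0   # temptation > reward > punishment > sucker

def payoff(me, other):
    return {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}[(me, other)]

def step(adj, strat):
    """One synchronous imitate-the-best update."""
    score = {v: sum(payoff(strat[v], strat[u]) for u in adj[v]) for v in adj}
    return {v: strat[max(list(adj[v]) + [v], key=score.get)] for v in adj}

# ring of 10 players: a connected block of 5 cooperators facing 5 defectors
adj = {i: {(i - 1) % 10, (i + 1) % 10} for i in range(10)}
strat = {i: 1 if i < 5 else 0 for i in range(10)}
for _ in range(20):
    strat = step(adj, strat)
frac_coop = sum(strat.values()) / len(strat)
```

In this toy run the cooperator block is stable: each boundary cooperator sees an interior cooperator scoring better than the neighbouring defector, so the cluster persists, a miniature of the clustering mechanism the abstract describes.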
Abstract:
Background: Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide the frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, which we validated retrospectively in patients from the Swiss HIV Cohort Study.

Methodology/Principal Findings: We built two prediction rules (a "Snap-shot rule" for a single sample and a "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules in 2608 untreated patients to classify their 18,061 CD4 counts as either justifiable or superfluous, according to their prior >= 5% or < 5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that either rule falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200 x 10^6/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold.

Conclusions/Significance: Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure the CD4 cell count 1 year after a count > 650 for a threshold of 200, > 900 for 350, or > 1150 for 500 x 10^6/L, respectively. When CD4 counts fall below these limits, more frequent monitoring becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
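The Snap-shot rule, as summarized in the abstract, amounts to a simple lookup: a repeat measurement within a year is superfluous when the last single count lies above the safety limit quoted for the chosen treatment threshold. A minimal sketch using the limits from the abstract (units: cells x 10^6/L):

```python
# Sketch of the "Snap-shot rule" using the limits quoted in the abstract:
# a repeat CD4 count within the next year is deemed superfluous when the
# last single count lies above a safety limit for the treatment threshold.

SAFETY_LIMITS = {200: 650, 350: 900, 500: 1150}  # threshold -> safety limit

def snapshot_superfluous(last_count, threshold):
    """True when a count within 1 year has a < 5% chance (per the study's
    validation) of finding the patient below the treatment threshold."""
    return last_count > SAFETY_LIMITS[threshold]

assert snapshot_superfluous(700, 200)       # well above the limit for 200
assert not snapshot_superfluous(850, 350)   # near the limit: keep monitoring
```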
Abstract:
We performed numerical simulations of DNA chains to understand how local geometry of juxtaposed segments in knotted DNA molecules can guide type II DNA topoisomerases to perform very efficient relaxation of DNA knots. We investigated how the various parameters defining the geometry of inter-segmental juxtapositions at sites of inter-segmental passage reactions mediated by type II DNA topoisomerases can affect the topological consequences of these reactions. We confirmed the hypothesis that by recognizing specific geometry of juxtaposed DNA segments in knotted DNA molecules, type II DNA topoisomerases can maintain the steady-state knotting level below the topological equilibrium. In addition, we revealed that a preference for a particular geometry of juxtaposed segments as sites of strand-passage reaction enables type II DNA topoisomerases to select the most efficient pathway of relaxation of complex DNA knots. The analysis of the best selection criteria for efficient relaxation of complex knots revealed that local structures in random configurations of a given knot type statistically behave as analogous local structures in ideal geometric configurations of the corresponding knot type.
Abstract:
ABSTRACT: Bullying is the intentional, repetitive or persistent hurting of one pupil by another (or several), where the relationship involves an imbalance of power. Bullying is a type of aggressive behaviour, and the act can be verbal, physical and/or psychological. The consequences for the victims are serious: school failure, depressive symptomatology, eating disorders, or suicidal ideation. Moreover, the perpetrators of bullying display more delinquent behaviour within and outside the school. Thus, preventive programmes targeting bullying could not only prevent victimisation but also reduce delinquency in general. Very little data concerning bullying had been collected in Switzerland and, apart from some local or cantonal studies, no national research among teenagers existed in the field. This work intends to fill that gap in order to provide a sufficient understanding of the phenomenon and to suggest directions for defining appropriate preventive measures. In order to better understand the problem of bullying in Swiss secondary schools, two surveys of self-reported juvenile delinquency were carried out. The first took place between 2003 and 2005 in the canton of Vaud among more than 4500 pupils; the second was administered in 2006 across Switzerland, with about 3600 youths taking part. The pupils answered the survey either in the classroom (paper questionnaire) or in the computer room (online questionnaire). The proportion of youths who reported having seriously bullied another pupil is about 7% in the canton of Vaud and 4% in the national sample. Statistical analyses first selected the variables most strongly related to bullying. The results show that youths with a low level of self-control and a positive attitude towards violence are more likely to bully others. The importance of environmental variables was also shown: the more a youth is supervised and monitored by adults, and the more the authorities (school, neighbourhood) play their role of social control by enforcing the rules and intervening impartially, the less the youth bullies. Moreover, multilevel analyses showed the existence of school effects on bullying. In particular, the rate of bullying in a given school increases when students of the same school vary widely in their perception of the school climate. Another important aspect concerns teachers' reactions when pupils fight: the influence of this variable on the bullying rate differs from one school to another.
Abstract:
Understanding how communities of living organisms assemble has been a central question in ecology since the early days of the discipline. Disentangling the different processes involved in community assembly is not only interesting in itself but also crucial for an understanding of how communities will behave under future environmental scenarios. The traditional concept of assembly rules reflects the notion that species do not co-occur randomly but are restricted in their co-occurrence by interspecific competition. This concept can be redefined in a more general framework where the co-occurrence of species is a product of chance, historical patterns of speciation and migration, dispersal, abiotic environmental factors, and biotic interactions, with none of these processes being mutually exclusive. Here we present a survey and meta-analyses of 59 papers that compare observed patterns in plant communities with null models simulating random patterns of species assembly. According to the type of data under study and the different methods that are applied to detect community assembly, we distinguish four main types of approach in the published literature: species co-occurrence, niche limitation, guild proportionality and limiting similarity. Results from our meta-analyses suggest that non-random co-occurrence of plant species is not a widespread phenomenon. However, whether this finding reflects the individualistic nature of plant communities or is caused by methodological shortcomings associated with the studies considered cannot be discerned from the available metadata. We advocate that more thorough surveys be conducted using a set of standardized methods to test for the existence of assembly rules in data sets spanning larger biological and geographical scales than have been considered until now. We underpin this general advice with guidelines that should be considered in future assembly rules research. 
This will enable us to draw more accurate and general conclusions about the non-random aspect of assembly in plant communities.
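The co-occurrence approach surveyed above can be sketched as a null-model test: compute a checkerboard score (C-score) for the observed species-by-site matrix and compare it with scores from randomized matrices. The tiny matrix and the row-shuffle null model below are illustrative simplifications (published null models typically also constrain column totals).

```python
import random

# Null-model sketch for species co-occurrence: the C-score counts, for each
# species pair, "checkerboard units" (sites where one occurs without the
# other). The observed matrix and null model here are invented toys.

def c_score(matrix):
    """Mean number of checkerboard units over all species pairs."""
    n = len(matrix)
    total, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            shared = sum(a and b for a, b in zip(matrix[i], matrix[j]))
            ri, rj = sum(matrix[i]), sum(matrix[j])
            total += (ri - shared) * (rj - shared)
            pairs += 1
    return total / pairs

def shuffled(matrix, rng):
    """Null model: shuffle each species' occurrences across sites
    (preserves row totals only, for brevity)."""
    out = []
    for row in matrix:
        row = row[:]
        rng.shuffle(row)
        out.append(row)
    return out

observed = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]]  # species x sites
rng = random.Random(0)
obs = c_score(observed)
null_scores = [c_score(shuffled(observed, rng)) for _ in range(999)]
p_value = sum(s >= obs for s in null_scores) / len(null_scores)
```

A small `p_value` would mean the observed segregation (high C-score) is unlikely under the null model, the classic signature of competition-driven assembly rules.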
Abstract:
The objective of this paper is to discuss whether children have a capacity for deontic reasoning that is irreducible to mentalizing. The results of two experiments point to the existence of such non-mentalistic understanding and prediction of the behaviour of others. In Study 1, young children (3- and 4-year-olds) were told different versions of classic false-belief tasks, some of which were modified by the introduction of a rule or a regularity. When the task (a standard change-of-location task) included a rule, the performance of 3-year-olds, who fail traditional false-belief tasks, significantly improved. In Study 2, 3-year-olds proved able to infer a rule from a social situation and to use it in order to predict the behaviour of a character involved in a modified version of the false-belief task. These studies suggest that rules play a central role in the social cognition of young children and that deontic reasoning might not necessarily involve mind reading.
Abstract:
Iron uptake and transcriptional regulation by the enantiomeric siderophores pyochelin (Pch) and enantio-pyochelin (EPch) of Pseudomonas aeruginosa and Pseudomonas fluorescens, respectively, are stereospecific processes. The iron-loaded forms of Pch (ferriPch) and of EPch (ferriEPch) are recognized stereospecifically (i) at the outer membrane by the siderophore receptors FptA in P. aeruginosa and FetA in P. fluorescens and (ii) in the cytoplasm by the two AraC-type regulators PchR, which are activated by their cognate siderophore. Here, stereospecific siderophore recognition is shown to occur at the inner membrane also. In P. aeruginosa, translocation of ferriPch across the inner membrane is carried out by the single-subunit siderophore transporter FptX. In contrast, the uptake of ferriEPch into the cytoplasm of P. fluorescens was found to involve a classical periplasmic binding protein-dependent ABC transporter (FetCDE), which is encoded by the fetABCDEF operon. Expression of a translational fetA-gfp fusion was repressed by ferric ions, and activated by the cognate siderophore bound to PchR, thus resembling the analogous regulation of the P. aeruginosa ferriPch transport operon fptABCX. The inner-membrane transporters FetCDE and FptX were expressed in combination with either of the two siderophore receptors FetA and FptA in a siderophore-negative P. aeruginosa mutant deleted for the fptABCX operon. Growth tests conducted under iron limitation with ferriPch or ferriEPch as the iron source revealed that FptX was able to transport ferriPch as well as ferriEPch, whereas FetCDE specifically transported ferriEPch. Thus, stereospecific siderophore recognition occurs at the inner membrane by the FetCDE transporter.
Abstract:
Human cooperation is often based on reputation gained from previous interactions with third parties. Such reputation can be built on generous or punitive actions, and both one's own reputation and the reputation of others have been shown to influence decision making in experimental games that control for confounding variables. Here we test how reputation-based cooperation and punishment react to disruption of cognitive processing in different kinds of helping games with observers. Having subjects say a few superfluous words before each interaction was used to interfere with working memory. In a first set of experiments, where reputation could only be based on generosity, the disruption reduced the frequency of cooperation and lowered mean final payoffs. In a second set of experiments, where reputation could only be based on punishment, the disruption increased the frequency of antisocial punishment (i.e. of punishing those who helped) and reduced the frequency of punishing defectors. Our findings suggest that working memory can easily become a constraint in reputation-based interactions within experimental games, even if these games are based on a few simple rules with a visual display that provides all the information the subjects need to play the strategies predicted by current theory. Our findings also highlight a weakness of experimental games, namely that they can be very sensitive to environmental variation, and that quantitative conclusions about antisocial punishment or other behavioral strategies can easily be misleading.
Resumo:
Schizophrenia is a major psychiatric disorder occurring with a prevalence of 1% in the worldwide population. It develops progressively, with psychosis onset in late adolescence or early adulthood. The disorder can take many different facets and has a highly diffuse and distributed neuropathology, including deficits in major neurotransmitter systems, myelination, stress regulation, and metabolism. The delayed onset and the heterogeneous pathology suggest that schizophrenia is a developmental disease that arises from the interplay of genetic and environmental factors during sensitive periods. Redox dysregulation, due to an imbalance between pro-oxidants and antioxidant defence mechanisms, is among the risk factors for schizophrenia. Glutathione (GSH) is the major cellular redox regulator and antioxidant. Levels of GSH are decreased in the cerebrospinal fluid, prefrontal cortex and post-mortem striatum of schizophrenia patients. Moreover, polymorphisms of the key GSH-synthesizing enzyme, the glutamate-cysteine ligase modifier (GCLM) subunit, are associated with the disease, suggesting that the GSH deficit is of genetic origin. Here we used mice knockout (KO) for the GCLM gene, which display a chronic GSH deficit (~70 to 80% decrease), to investigate the direct link between redox dysregulation and schizophrenia. Accordingly, we evaluated whether GCLM KO mice, compared to normal wildtype mice, display behavioral changes that relate to schizophrenia symptoms and whether their brains show morphological, functional or metabolic alterations that resemble those in patients. Moreover, we exposed pubertal GCLM mice to repeated mild stress and measured their hormonal and behavioral stress reactivity. Our data show that chronic GSH deficit is associated with altered emotion- and stress-related behaviors, deficient prepulse inhibition, pronounced amphetamine-induced hyperlocomotion, but normal spatial learning and working memory.
These changes represent important schizophrenia endophenotypes. Moreover, this particular pattern of change indicates impairment of the ventral hippocampus (VH) and related circuitry, as opposed to the dorsal hippocampus (DH), which is implicated in spatial information processing. This is consistent with a selective deficit of parvalbumin-positive interneurons and gamma oscillations in the VH but not the DH. Increased levels of circulating stress hormones in KO mice following pubertal stress corroborate VH dysfunction, as the VH is involved in negative feedback control of the stress response. VH structural and functional deficits are frequently found in the schizophrenic brain. Metabolic evaluation of the developing GCLM KO anterior cortex using in vivo magnetic resonance spectroscopy revealed elevated glutamine (Gln), glutamate (Glu), Gln/Glu and N-acetyl-aspartate (NAA) during the pre-pubertal period. Similar changes are reported in early schizophrenia. Overall, we observe phenotypic anomalies in GSH-deficient GCLM KO mice that correspond to major schizophrenia endophenotypes. This supports an important role for redox dysregulation in schizophrenia and validates the GCLM KO mouse as a model for the disease. Moreover, our results indicate that puberty may be a sensitive period for redox-sensitive changes, highlighting the importance of early intervention. Gln, Gln/Glu, Glu and NAA may qualify as early metabolic biomarkers to identify young at-risk individuals. Since chronic treatment with NAC normalized most metabolic changes in GCLM KO mice, NAC may be one adjunct treatment of choice for early intervention in patients.
RÉSUMÉ
Schizophrenia is a major psychiatric disorder with a prevalence of 1% in the population. Its development is progressive, with the first psychotic episodes appearing in adolescence or early adulthood. The disease has several presentations and a widespread neuropathology, which includes neurochemical and metabolic deficits as well as deficits in myelination and stress regulation. The late emergence and the heterogeneity of the pathology suggest that schizophrenia is a developmental disease, promoted by genetic and environmental factors during sensitive periods. Redox dysregulation, due to an imbalance between pro-oxidant factors and antioxidant defences, constitutes a risk factor. Glutathione (GSH) is the main cellular redox regulator and antioxidant; its levels are reduced in the cerebrospinal fluid, prefrontal cortex and striatum of patients. Moreover, variants of the gene encoding the modifier subunit (GCLM) of glutamate-cysteine ligase, a GSH-synthesizing enzyme, are associated with the disease, suggesting that the deficit observed in patients is of genetic origin. We therefore used mice carrying a deletion of the GCLM gene (KO), which have a chronic GSH deficit (70-80%), in order to study the link between redox dysregulation and schizophrenia. We assessed whether these mice show behavioral alterations analogous to the symptoms of the disease, and structural, functional and metabolic changes in the brain resembling those of patients. In addition, we subjected the mice to mild stresses during puberty, then measured their hormonal and behavioral responses. The animals show a pre-attentional deficit in sensorimotor information processing, a deficit in certain forms of learning, an increased response to amphetamine, but preserved spatial and working memory. These behavioral impairments are analogous to certain endophenotypes of schizophrenia. Moreover, these behavioral changes are largely explained by a morphological and functional perturbation of the ventral hippocampus (VH). Indeed, we observed a selective deficit of parvalbumin-immunoreactive interneurons and a neuronal desynchronization in the VH. The dorsal hippocampus, implicated in spatial orientation, remained intact. The increase in stress hormones in the blood of KO mice following a prepubertal stress also supports the hypothesis of a VH dysfunction, as this region is known to modulate this type of response. Structural and functional deficits in the anterior (ventral) hippocampus have moreover been reported in schizophrenic patients. Using magnetic resonance spectroscopy, we also followed the metabolic profile of the anterior cortex during the postnatal development of the KO mice. These measurements revealed elevated levels of glutamine (Gln), glutamate (Glu), the Gln/Glu ratio, and N-acetyl-aspartate (NAA) during the prepubertal period. Similar alterations have been described in patients during the early phase of the disease. We have thus revealed phenotypic anomalies in GCLM KO mice that reflect certain endophenotypes of schizophrenia. Our results therefore support the role of redox dysregulation in the emergence of the disease and the potential of the KO mice as a model. Moreover, this study highlights puberty as a period particularly sensitive to redox dysregulation, reinforcing the importance of early therapeutic intervention. In this context, Gln, Gln/Glu, Glu and NAA could be key biomarkers for identifying young at-risk individuals. Given its efficacy in our model, NAC could be a substance of choice for the early treatment of patients.
Resumo:
Insect societies are paramount examples of cooperation, yet they also harbor internal conflicts whose resolution depends on the power of the opponents. The male-haploid, female-diploid sex-determining system of ants causes workers to be more related to sisters than to brothers, whereas queens are equally related to daughters and sons. Workers should thus allocate more resources to females than to males, while queens should favor an equal investment in each sex. Female-biased sex allocation and manipulation of the sex ratio during brood development suggest that workers prevail in many ant species. Here, we show that queens of Formica selysi strongly influenced colony sex allocation by biasing the sex ratio of their eggs. Most colonies specialized in the production of a single sex. Queens in female-specialist colonies laid a high proportion of diploid eggs, whereas queens in male-specialist colonies laid almost exclusively haploid eggs, which constrains worker manipulation. However, the change in sex ratio between the egg and pupae stages suggests that workers eliminated some male brood, and the population sex-investment ratio was between the queens' and workers' equilibria. Altogether, these data provide evidence for an ongoing conflict between queens and workers, with a prominent influence of queens as a result of their control of egg sex ratio.
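The relatedness asymmetry driving this queen-worker conflict follows the standard Trivers-Hare arithmetic for haplodiploid sex determination. The sketch below (a generic illustration assuming a single, singly mated queen, not a claim about F. selysi colony structure) makes the 3:1 versus 1:1 investment equilibria explicit:

```python
# Regression relatedness under haplodiploidy, from a worker's perspective,
# assuming one singly mated queen (generic Trivers-Hare illustration).

# A sister shares the father's entire haploid genome (paternal half identical
# with certainty) plus, on average, half of the maternal half.
r_sister = 0.5 * 1.0 + 0.5 * 0.5   # 0.75

# A brother develops from an unfertilized egg and carries only maternal genes,
# so a worker shares, on average, half of her maternal half with him.
r_brother = 0.5 * 0.5              # 0.25

# The queen transmits half her genome to each offspring, regardless of sex.
r_daughter = r_son = 0.5

# Each party's preferred female:male investment ratio is proportional to its
# relatedness ratio toward the two sexes.
worker_optimum = r_sister / r_brother   # 3.0  -> 3:1 female bias
queen_optimum = r_daughter / r_son      # 1.0  -> 1:1 investment

print(worker_optimum, queen_optimum)
```

A population sex-investment ratio falling between these two values, as reported in the abstract, is the expected signature of partial control by each party.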
Resumo:
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
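The experience-weighted attraction (EWA) class mentioned above has a compact update rule (Camerer and Ho's formulation): attractions decay at rate phi, an experience weight N decays at rate rho, forgone payoffs enter with weight delta, and actions are chosen by a logit rule with sensitivity lam. The following minimal sketch is an illustrative implementation of that general mechanism, not the specific agent-based model of the paper; the parameter defaults are arbitrary:

```python
import math
import random

class EWALearner:
    """Minimal experience-weighted attraction learner for an n-action game."""

    def __init__(self, n_actions=2, phi=0.9, rho=0.9, delta=0.5, lam=2.0):
        self.phi = phi      # decay of past attractions
        self.rho = rho      # decay of the experience weight N
        self.delta = delta  # weight on forgone payoffs (0 = pure reinforcement)
        self.lam = lam      # logit choice sensitivity (exploration rate ~ 1/lam)
        self.N = 1.0        # experience weight
        self.A = [0.0] * n_actions  # attractions

    def choose(self):
        """Sample an action with logit (softmax) probabilities."""
        weights = [math.exp(self.lam * a) for a in self.A]
        r = random.random() * sum(weights)
        for j, w in enumerate(weights):
            r -= w
            if r <= 0:
                return j
        return len(weights) - 1

    def update(self, chosen, payoffs):
        """payoffs[j] = payoff action j would have earned this round.

        Chosen action reinforced with full weight; unchosen actions
        reinforced by their forgone payoff scaled by delta.
        """
        new_N = self.rho * self.N + 1.0
        for j in range(len(self.A)):
            w = 1.0 if j == chosen else self.delta
            self.A[j] = (self.phi * self.N * self.A[j] + w * payoffs[j]) / new_N
        self.N = new_N
```

Setting delta = 0 recovers standard reinforcement learning (only realized payoffs matter), while delta = 1 with phi = rho gives a weighted fictitious-play (belief-based) learner, which is the sense in which EWA nests both special cases named in the abstract.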