Abstract:
Adapted filamentous pathogens such as the oomycetes Hyaloperonospora arabidopsidis (Hpa) and Phytophthora infestans (Pi) project specialized hyphae, the haustoria, inside living host cells to suppress host defence and acquire nutrients. Accommodation of haustoria requires reorganization of the host cell and the biogenesis of a novel host cell membrane, the extrahaustorial membrane (EHM), which envelops the haustorium, separating the host cell from the pathogen. Here, we applied live-cell imaging of fluorescently tagged proteins labelling a variety of membrane compartments and investigated the subcellular changes associated with accommodating oomycete haustoria in Arabidopsis and N. benthamiana. Plasma membrane-resident proteins differentially localized to the EHM. Likewise, secretory vesicles and endosomal compartments surrounded Hpa and Pi haustoria, revealing differences between these two oomycetes and suggesting a role for vesicle trafficking pathways in the pathogen-controlled biogenesis of the EHM. The latter is supported by the enhanced susceptibility of mutants in endosome-mediated trafficking regulators. These observations point to host subcellular defences and to specialization of the EHM in a pathogen-specific manner. Defence-associated haustorial encasements, double-layered membranes that grow around mature haustoria, were frequently observed in Hpa interactions. Intriguingly, all tested plant proteins accumulated at Hpa haustorial encasements, suggesting that default vesicle trafficking pathways are generally recruited to defend against pathogen access. Altogether, our results show common requirements of the subcellular changes associated with oomycete biotrophy and highlight differences between two oomycete pathogens in reprogramming host cell vesicle trafficking for haustoria accommodation. This provides a framework for further dissection of the pathogen-triggered reprogramming of host subcellular changes.
Abstract:
Social identity is a double-edged sword. On the one hand, identifying with a social group is a prerequisite for the sharing of common norms and values, solidarity, and collective action. On the other hand, in-group identification often goes together with prejudice and discrimination. Today, these two sides of social identification underlie contradictory trends in the way European nations and European nationals relate to immigrants and immigration. Most European countries are becoming increasingly multicultural, and anti-discrimination laws have been adopted throughout the European Union, demonstrating a normative shift towards more social inclusion and tolerance. At the same time, racist and xenophobic attitudes still shape social relations, individual as well as collective behaviour (both informal and institutional), and political positions throughout Europe. The starting point for this chapter is Sanchez-Mazas' (2004) interactionist approach to the study of racism and xenophobia, which in turn builds on Axel Honneth's (1996) philosophical theory of recognition. In this view, the origin of attitudes towards immigrants cannot be located in one or the other group, but in a dynamic of mutual influence. Sanchez-Mazas' approach is used as a general framework into which we integrate social psychological approaches to prejudice and recent empirical findings examining minority-majority relations. We particularly focus on the role of national and European identities as antecedents of anti-immigrant attitudes held by national majorities. Minorities' reactions to denials of recognition are also examined. We conclude by delineating possible social and political responses to prejudice towards immigrants.
Abstract:
AIMS: Common carotid artery intima-media thickness (CCIMT) is widely used as a surrogate marker of atherosclerosis, given its predictive association with cardiovascular disease (CVD). The interpretation of CCIMT values has been hampered by the absence of reference values, however. We therefore aimed to establish reference intervals of CCIMT, obtained using what is probably the most accurate method at present (i.e. echotracking), to help interpretation of these measures. METHODS AND RESULTS: We combined CCIMT data obtained by echotracking on 24 871 individuals (53% men; age range 15-101 years) from 24 research centres worldwide. Individuals without CVD, cardiovascular risk factors (CV-RFs), and BP-, lipid-, and/or glucose-lowering medication constituted a healthy sub-population (n = 4234) used to establish sex-specific equations for percentiles of CCIMT across age. With these equations, we generated CCIMT Z-scores in different reference sub-populations, thereby allowing for a standardized comparison between observed and predicted ('normal') values from individuals of the same age and sex. In the sub-population without CVD and treatment (n = 14 609), and in men and women, respectively, CCIMT Z-scores were independently associated with systolic blood pressure [standardized βs 0.19 (95% CI: 0.16-0.22) and 0.18 (0.15-0.21)], smoking [0.25 (0.19-0.31) and 0.11 (0.04-0.18)], diabetes [0.19 (0.05-0.33) and 0.19 (0.02-0.36)], total-to-HDL cholesterol ratio [0.07 (0.04-0.10) and 0.05 (0.02-0.09)], and body mass index [0.14 (0.12-0.17) and 0.07 (0.04-0.10)]. CONCLUSION: We estimated age- and sex-specific percentiles of CCIMT in a healthy population and assessed the association of CV-RFs with CCIMT Z-scores, which enables comparison of IMT values for (patient) groups with different cardiovascular risk profiles, helping interpretation of such measures obtained in both research and clinical settings.
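The standardized comparison described in this abstract reduces to a Z-score: the observed CCIMT minus the age- and sex-predicted 'normal' value, divided by the age- and sex-specific spread. A minimal sketch follows; the linear mean/SD equations inside it are invented placeholders for illustration, not the percentile equations fitted in the study.

```python
def ccimt_z_score(observed_mm, age, sex):
    """Z-score of an observed CCIMT (in mm) against a hypothetical
    age- and sex-specific reference. The coefficients below are
    placeholders, NOT the study's fitted equations."""
    if sex == "M":
        mean = 0.40 + 0.0045 * age   # placeholder reference mean (mm)
        sd = 0.05 + 0.0010 * age     # placeholder reference SD (mm)
    else:
        mean = 0.38 + 0.0042 * age
        sd = 0.05 + 0.0009 * age
    return (observed_mm - mean) / sd
```

A Z-score of 0 means the observed thickness equals the predicted 'normal' value for that age and sex; positive values indicate thicker-than-expected walls, which is what makes groups with different risk profiles directly comparable.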
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and, thus, allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. Also, the experiments show that, when using parallel computing, it is possible to improve the top ILS-based metaheuristic by simply incorporating our biased randomization process, with a high-quality pseudo-random number generator, into it.
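The ILS framework the abstract builds on can be sketched in a few dozen lines. This is a hedged, generic reconstruction under assumed operators (an insertion local search, a two-job reinsertion perturbation, and a simple non-worsening acceptance rule), not the authors' exact ILS-ESP operators; names such as `makespan`, `proc`, and `iters` are illustrative.

```python
import random

def makespan(seq, proc):
    """Flowshop makespan of job sequence `seq`; proc[job][machine]."""
    m = len(proc[0])
    comp = [0.0] * m
    for job in seq:
        comp[0] += proc[job][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[job][k]
    return comp[-1]

def local_search(seq, proc):
    """First-improvement search over the insertion neighbourhood."""
    best = makespan(seq, proc)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq)):
            for j in range(len(seq)):
                if i == j:
                    continue
                cand = seq[:]
                cand.insert(j, cand.pop(i))
                c = makespan(cand, proc)
                if c < best:
                    seq, best = cand, c
                    improved = True
    return seq, best

def ils(proc, iters=100, seed=0):
    """Generic parameter-light ILS sketch (not the ILS-ESP operators)."""
    rng = random.Random(seed)
    seq, best = local_search(list(range(len(proc))), proc)
    best_seq = seq[:]
    for _ in range(iters):
        pert = seq[:]
        # Perturbation: remove two random jobs, reinsert at random slots.
        for _ in range(2):
            job = pert.pop(rng.randrange(len(pert)))
            pert.insert(rng.randrange(len(pert) + 1), job)
        cand, c = local_search(pert, proc)
        if c <= best:  # accept only non-worsening solutions
            seq, best = cand, c
            best_seq = cand[:]
    return best_seq, best
```

The "parameter-free" spirit shows up in the acceptance rule: instead of a tunable temperature or threshold, a candidate is kept only if it does not worsen the incumbent.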
Abstract:
The obesity epidemic is associated with the recent availability of highly palatable and inexpensive caloric food as well as important changes in lifestyle. Genetic factors, however, play a key role in regulating energy balance, and numerous twin studies have estimated the heritability of BMI at between 40 and 70%. While common variants identified through genome-wide association studies (GWAS) point toward new pathways, their effect sizes are too low to be of any use in the clinic. This review therefore concentrates on genes and genomic regions associated with very high risks of human obesity. Although there are no consensus guidelines, we review how the knowledge of these "causal factors" can be translated into the clinic for diagnostic purposes. We propose genetic workups guided by clinical manifestations in patients with severe early-onset obesity. While etiological diagnoses are unequivocal in a minority of patients, new genomic tools such as Comparative Genomic Hybridization (CGH) arrays have allowed the identification of novel "causal" loci, and next-generation sequencing brings the promise of an accelerated pace of discoveries relevant to clinical practice.
Abstract:
City Audit Report
Abstract:
Abstract: The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs.
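The Nowak and May (1992) mechanism referenced above can be sketched as a toy simulation: players on a grid play a one-shot Prisoner's Dilemma with their neighbours and then copy the strategy of the highest-scoring player in their neighbourhood. This is a hedged illustration; the payoff value `b` and the toroidal (wrap-around) grid are simplifying choices of this sketch, not parameters from the text.

```python
import random

def spatial_pd(n=20, b=1.6, steps=30, seed=1):
    """Toy spatial Prisoner's Dilemma in the spirit of Nowak & May (1992).
    Payoffs: mutual cooperation 1, defection against a cooperator b > 1,
    everything else 0. Returns the final fraction of cooperators."""
    rng = random.Random(seed)
    grid = [[rng.random() < 0.5 for _ in range(n)] for _ in range(n)]  # True = C
    def neigh(i, j):  # four neighbours on a torus (simplification)
        return [((i - 1) % n, j), ((i + 1) % n, j),
                (i, (j - 1) % n), (i, (j + 1) % n)]
    for _ in range(steps):
        score = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for x, y in neigh(i, j):
                    if grid[i][j] and grid[x][y]:
                        score[i][j] += 1.0          # C meets C
                    elif not grid[i][j] and grid[x][y]:
                        score[i][j] += b            # D exploits C
        new = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                best = (score[i][j], grid[i][j])    # imitate best neighbour,
                for x, y in neigh(i, j):            # including oneself
                    if score[x][y] > best[0]:
                        best = (score[x][y], grid[x][y])
                new[i][j] = best[1]
        grid = new
    return sum(c for row in grid for c in row) / (n * n)
```

The point of the model is that clusters of cooperators can shield each other from exploitation, so cooperation can persist even in an anonymous, unrepeated game.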
Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could give explanations as to why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures. Furthermore, I extract and describe its community structure taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
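The clustering coefficient mentioned above quantifies the local structure that separates small-world networks from random graphs: for one node, it is the fraction of pairs of its neighbours that are themselves connected. A minimal sketch, assuming the graph is stored as a dict of neighbour sets:

```python
def clustering_coefficient(adj, node):
    """Local clustering coefficient of `node` in an undirected graph.
    `adj` maps each node to the set of its neighbours."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0  # fewer than two neighbours: no pairs to close
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))  # closed pairs / possible pairs
```

A triangle gives a coefficient of 1.0 at every node, while the middle node of a path gives 0.0; averaging over all nodes yields the network-level clustering that small-world graphs keep high and random graphs do not.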
Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining this cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics used and on how individual payoffs are calculated.