Abstract:
Background: To replicate, retroviruses must insert DNA copies of their RNA genomes into the host genome. This integration process is catalyzed by the viral integrase protein. The site of viral integration has been shown to be non-random and retrovirus-specific. LEDGF/p75, a splice variant encoded by the PSIP1 gene and described as a general transcription coactivator, was identified as a tethering factor binding both to chromatin and to lentiviral integrases, thereby affecting integration efficiency as well as integration site selection. LEDGF/p75 is still a poorly characterized protein, and its endogenous cellular function has yet to be fully determined. As a first step towards unveiling the roles of LEDGF/p75 in the cell, we investigated the mechanisms involved in the regulation of LEDGF/p75. Materials and methods: To identify the PSIP1 minimal promoter and associated regulatory elements, we cloned the region extending from 5 kb upstream of the transcription start site (TSS, +1 reference position) to the ATG start codon (+816), as well as systematic truncations of it, into a plasmid containing the firefly luciferase reporter gene. These constructs were co-transfected into HEK293 cells with a plasmid encoding Renilla luciferase under the pTK promoter as an internal control for transfection efficiency. Both luciferase activities were assessed by luminescence as an indicator of promoter activity. Results: Luciferase assays identified regions -76 to +1 and +1 to +94 as two independent minimal promoters, showing 3.7x and 2.3x increases in luciferase activity, respectively. These two minimal promoters worked synergistically, increasing luciferase activity up to 16.3x over background. Moreover, we identified five regulatory blocks which modulated luciferase activity depending on the DNA region tested: three enhancers (-2007 to -1159, -284 to -171 and +94 to +644) and two silencers (-171 to -76 and +796 to +816).
However, the silencing effect of the -171 to -76 region is dependent on the presence of the +94 to +644 region, ruling out the enhancer activity of the latter. Computational analysis of the PSIP1 promoter revealed the absence of TATA box and initiator (INR) sequences, classifying this promoter as non-conventional. TATA-less and INR-less promoters are characterized by multiple Sp1 binding sites, which are involved in the recruitment of the RNA Pol II complex. Consistent with this, the PSIP1 promoter contains multiple putative Sp1 binding sequences in regions -76 to +1 and +1 to +94.
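The fold-change arithmetic behind these dual-luciferase assays can be sketched as follows. The raw luminescence counts are hypothetical, chosen only to mirror the reported fold changes: firefly counts report promoter activity, Renilla counts normalize for transfection efficiency, and activation is expressed relative to a promoterless background construct.

```python
# Sketch of the dual-luciferase arithmetic; all luminescence counts are
# hypothetical, chosen to mirror the fold changes reported in the abstract.

def fold_activation(firefly, renilla, bg_firefly, bg_renilla):
    """Firefly/Renilla ratio of a construct over the background ratio."""
    return (firefly / renilla) / (bg_firefly / bg_renilla)

background = (1_000, 10_000)   # promoterless reporter (firefly, Renilla)
minimal_a = (3_700, 10_000)    # region -76 to +1
minimal_b = (2_300, 10_000)    # region +1 to +94
combined = (16_300, 10_000)    # regions -76 to +94 together

fa = fold_activation(*minimal_a, *background)    # ~3.7x
fb = fold_activation(*minimal_b, *background)    # ~2.3x
fab = fold_activation(*combined, *background)    # ~16.3x

# Synergy: the combined construct exceeds the sum of the separate activities
print(fa, fb, fab, fab > fa + fb)
```

The last comparison is the operational meaning of "synergistic" here: 16.3x is well above 3.7x + 2.3x.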
Abstract:
A character network represents relations between characters from a text; the relations are based on text proximity, shared scenes/events, quoted speech, etc. Our project sketches a theoretical framework for character network analysis, bringing together narratology, both close and distant reading approaches, and social network analysis. It is in line with recent attempts to automatise the extraction of literary social networks (Elson, 2012; Sack, 2013) and with other studies stressing the importance of character-systems (Woloch, 2003; Moretti, 2011). The method we use to build the network is direct and simple. First, we extract co-occurrences from a book index, without the need for text analysis. We then describe the narrative roles of the characters, which we deduce from their respective positions in the network, i.e. from the discourse. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. We start by identifying co-occurrences of characters in the book index of our edition (Slatkine, 2012). Subsequently, we compute four types of centrality: degree, closeness, betweenness and eigenvector centrality. We then use these measures to propose a typology of narrative roles for the characters. We show that the two parts of Les Confessions, written years apart, are structured around mirroring central figures that bear similar centrality scores. The first part revolves around Rousseau's mentor, a figure of openness; the second part centres on a group of schemers, depicting a period of deep paranoia. We also highlight characters with intermediary roles: they provide narrative links between the societies in the life of the author. The method we detail in this complete case study of character network analysis can be applied to any work documented by an index.
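The pipeline described above can be sketched in a few lines with networkx. The character names and co-occurrence pairs below are hypothetical placeholders, not data extracted from the Slatkine index.

```python
# Sketch: build a co-occurrence graph from index entries and compute the
# four centralities used in the study. Names and pairs are placeholders.
import networkx as nx

# Each pair = two characters listed together in the book index
cooccurrences = [
    ("Rousseau", "Warens"), ("Rousseau", "Diderot"),
    ("Rousseau", "Grimm"), ("Diderot", "Grimm"),
    ("Warens", "Claude Anet"),
]

G = nx.Graph()
for a, b in cooccurrences:
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1   # repeated co-occurrences strengthen the tie
    else:
        G.add_edge(a, b, weight=1)

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

# Rank characters by each measure to sketch a typology of narrative roles
for name, scores in centralities.items():
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(name, ranking)
```

On such a star-like toy graph all four measures single out the same hub; on a full novel-scale network they diverge, which is what supports a typology of narrative roles.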
Abstract:
BACKGROUND: Exclusive liver metastases occur in up to 40% of patients with uveal melanoma and are associated with a median survival of 2-7 months. Single-agent response rates with commonly available chemotherapy are below 10%. We have investigated the use of fotemustine via direct intra-arterial hepatic (i.a.h.) administration in patients with uveal melanoma metastases. PATIENTS AND METHODS: A total of 101 patients from seven centers were treated with i.a.h. fotemustine, administered weekly for a 4-week induction period and then as a maintenance treatment every 3 weeks until disease progression, unacceptable toxicity or patient refusal. RESULTS: A median of eight fotemustine infusions per patient was delivered (range 1-26). Catheter-related complications occurred in 23% of patients; however, they required treatment discontinuation in only 10% of the patients. The overall response rate was 36%, with a median overall survival of 15 months and a 2-year survival rate of 29%. LDH, time between diagnosis and treatment start, and gender were significant predictors of survival. CONCLUSIONS: Locoregional treatment with fotemustine is well tolerated and seems to improve the outcome of this poor-prognosis patient population. Median survival rates are among the longest reported, and one-third of the patients were still alive at 2 years.
Abstract:
OBJECTIVE: To test a method that allows automatic set-up of the ventilator controls at the onset of ventilation. DESIGN: Prospective randomized crossover study. SETTING: ICUs in one adult and one children's hospital in Switzerland. PATIENTS: Thirty intubated, stable, critically ill patients (20 adults and 10 children). INTERVENTIONS: The patients were ventilated during two 20-min periods using a modified Hamilton AMADEUS ventilator. During the control period the ventilator settings were chosen immediately prior to the study. During the other period individual settings were automatically determined by the ventilator (AutoInit). MEASUREMENTS AND RESULTS: Pressure, flow, and instantaneous CO2 concentration were measured at the airway opening. From these measurements, series dead space (VDS), expiratory time constant (RC), tidal volume (VT), total respiratory frequency (f(tot)), minute ventilation (MV), and maximal and mean airway pressure (Paw,max and Paw,mean) were calculated. Arterial blood gases were analyzed at the end of each period. Paw,max was significantly lower with the AutoInit ventilator settings, while f(tot) was significantly greater (P < 0.05); the differences in the other values were not statistically significant. CONCLUSIONS: The automatically derived AutoInit ventilator settings were acceptable for all patients for a period of 20 min and were not found to be inferior to the control ventilator settings. This makes the AutoInit method potentially useful as an automatic start-up procedure for mechanical ventilation.
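Two of the derived quantities listed above, tidal volume (VT) and the expiratory time constant (RC), can be illustrated on a synthetic single-exponential expiratory flow signal. The abstract does not describe AutoInit's internal algorithm, so this is only an assumed textbook-style computation, not the device's actual method.

```python
# Sketch: derive VT (integral of expiratory flow) and RC (from the
# exponential decay of expiratory flow) on a synthetic waveform.
# This is an assumed textbook-style computation, not AutoInit's algorithm.
import math

dt = 0.01        # sampling interval (s)
true_rc = 0.7    # time constant of the synthetic lung (s)
peak_flow = 0.5  # expiratory flow at t = 0 (L/s)

# Synthetic expiratory flow: V'(t) = V'(0) * exp(-t / RC), 2 s of expiration
flow = [peak_flow * math.exp(-k * dt / true_rc) for k in range(200)]

# Tidal volume: numerically integrate flow over expiration (litres)
vt = sum(f * dt for f in flow)

# RC estimate: the slope of ln(flow) versus time is -1/RC
k1, k2 = 10, 150
slope = (math.log(flow[k2]) - math.log(flow[k1])) / ((k2 - k1) * dt)
rc_est = -1.0 / slope

print(f"VT = {vt:.3f} L, RC = {rc_est:.2f} s")
```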
Abstract:
This doctoral thesis deals with the rise and potential fall of the achievement career as an institutional biographical pattern. I start from the assumption that the achievement career, as a result of the spread of large-scale bureaucratic companies, the male-breadwinner family model, and meritocratic ideals, came to life in the first half of the twentieth century. During the so-called trente glorieuses, it even became a normatively dominant and politically significant male biographical pattern. But the structural changes that announced the end of the post-war golden age also seemed to threaten and, according to certain scholars, erode this type of occupational trajectory. In order to understand this dynamic, I attempt to reconstruct the achievement career in Switzerland empirically. I examine (1) the structural changes of the economic field from 1970 to 2000, (2) the transformations of occupational trajectories during this period, and (3) the ways in which the individuals concerned interpret and react to these changes.
Abstract:
Dual-boosted protease inhibitor (DBPI) regimens are an option for salvage therapy in patients with resistant HIV-1. Patients receiving a DBPI in the Swiss HIV Cohort Study between January 1996 and March 2007 were studied. The outcome of interest was viral suppression at 24 weeks. In total, 295 patients (72.5%) were on a DBPI for over 6 months. The median duration was 2.2 years. Of 287 patients who had HIV-RNA >400 copies/ml at the start of the regimen, 184 (64.1%) were ever suppressed while on the DBPI and 156 (54.4%) were suppressed within 24 weeks. The median time to suppression was 101 days (95% confidence interval 90-125 days). The median number of past regimens was 6 (IQR 3-8). The main reasons for discontinuing the regimen were the patient's wish (48.3%), treatment failure (22.5%), and toxicity (15.8%). Acquisition of HIV through intravenous drug use, and the use of lopinavir in combination with saquinavir or atazanavir, were associated with an increased likelihood of suppression within 6 months. Patients on DBPI are heavily treatment-experienced. Viral suppression within 6 months was achieved in more than half of the patients. There may be a place for DBPI regimens in settings where more expensive alternatives are not available.
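A median time to suppression with a confidence interval is the kind of quantity a Kaplan-Meier-type estimator produces. The abstract does not state the estimation method, and the follow-up data below are hypothetical, so the following is only an illustrative sketch of how such a median is read off a survival curve.

```python
# Sketch: median time-to-event via a minimal Kaplan-Meier estimator.
# event = 1 means suppression observed at that day, 0 means censored.
# The cohort data are hypothetical, not the study's data.

def km_median(data):
    """Return the first time at which the Kaplan-Meier survival curve
    (probability of *not yet* being suppressed) drops to 0.5 or below."""
    data = sorted(data)
    surv = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for time, ev in data if time == t and ev == 1)
        at_risk = sum(1 for time, _ in data if time >= t)
        surv *= 1.0 - events / at_risk
        if surv <= 0.5:
            return t
        i += sum(1 for time, _ in data if time == t)
    return None  # median not reached within follow-up

# Hypothetical follow-up times in days
cohort = [(60, 1), (80, 1), (90, 1), (120, 1), (150, 0), (200, 1), (400, 0)]
print(km_median(cohort))
```

Censored patients (still unsuppressed at last follow-up) keep contributing to the at-risk denominator until they drop out, which is why the median differs from a naive median of observed suppression times.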
Abstract:
Anti-CTLA-4 treatment improves the survival of patients with advanced-stage melanoma. However, although the anti-CTLA-4 antibody ipilimumab is now an approved treatment for patients with metastatic disease, it remains unknown by which mechanism it boosts tumor-specific T cell activity. In particular, it is unclear whether treatment amplifies previously induced T cell responses or whether it induces new tumor-specific T cell reactivities. Using a combination of ultraviolet (UV)-induced peptide exchange and peptide-major histocompatibility complex (pMHC) combinatorial coding, we monitored immune reactivity against a panel of 145 melanoma-associated epitopes in a cohort of patients receiving anti-CTLA-4 treatment. Comparison of pre- and post-treatment T cell reactivities in peripheral blood mononuclear cell samples of 40 melanoma patients demonstrated that anti-CTLA-4 treatment induces a significant increase in the number of detectable melanoma-specific CD8 T cell responses (P = 0.0009). In striking contrast, the magnitude of both virus-specific and melanoma-specific T cell responses that were already detected before the start of therapy remained unaltered by treatment (P = 0.74). The observation that anti-CTLA-4 treatment induces a significant number of newly detected T cell responses, but only infrequently boosts preexisting immune responses, provides strong evidence for anti-CTLA-4 therapy-enhanced T cell priming as a component of the clinical mode of action.
Abstract:
The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced at low-grade metamorphism. Fold nappes and thrust sheets are also among the most common features in orogens. Fold nappes are kilometer-scale recumbent folds which feature a weakly deformed normal limb and an intensely deformed overturned limb. Thrust sheets, on the other hand, are characterized by the absence of an overturned limb and can be defined as almost rigid blocks of crust that are displaced sub-horizontally over up to several tens of kilometers. The Morcles and Doldenhorn nappes are classic examples of fold nappes and constitute the so-called infra-Helvetic complex in Western and Central Switzerland, respectively. This complex is overridden by thrust sheets such as the Diablerets and Wildhorn nappes in Western Switzerland. One of the most famous examples of thrust sheets worldwide is the Glarus thrust sheet in Central Switzerland, which features over 35 kilometers of thrusting accommodated by a ~1 m thick shear zone. Since the work of early Alpine geologists such as Heim and Lugeon, the knowledge of these nappes has been steadily refined, and today the geometry and kinematics of the Helvetic nappe system are generally agreed upon. However, despite the extensive knowledge we have today of the kinematics of fold nappes and thrust sheets, the mechanical processes leading to the emplacement of these nappes are still poorly understood. For a long time geologists faced the so-called 'mechanical paradox', which arises from the fact that a block of rock several kilometers high and tens of kilometers long (i.e. a nappe) would break internally rather than start moving on a low-angle plane. Several solutions were proposed to solve this apparent paradox. Certainly the most successful is the theory of critical wedges (e.g. Chapple, 1978; Dahlen, 1984).
In this theory the orogen is considered as a whole, and this change of scale allows thrust-sheet-like structures to form while remaining consistent with mechanics. However, this theory is intricately linked to brittle rheology, and fold nappes, which are inherently ductile structures, cannot be created in these models. When considering the problem of nappe emplacement from the perspective of ductile rheology, the problem of strain localization arises. The aim of this thesis was to develop and apply models based on continuum mechanics, integrating heat transfer, to understand the emplacement of nappes. The models were solved either analytically or numerically. In the first two papers of this thesis we derived a simple model which describes channel flow in a homogeneous material with temperature-dependent viscosity. We applied this model to the Morcles fold nappe and to several kilometer-scale shear zones worldwide. In the last paper we zoomed out and studied the tectonics of (i) ductile and (ii) visco-elasto-plastic, temperature-dependent wedges, focusing on the relationship between basement and cover deformation. We demonstrated that during the compression of a ductile passive margin both fold nappes and thrust sheets can develop, and that these apparently different structures constitute two end-members of a single structure (i.e. the nappe). The transition from fold nappe to thrust sheet is, to first order, controlled by the deformation of the basement.
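The mechanism the channel-flow model exploits, namely viscosity dropping exponentially with temperature so that strain localizes where the rock is hottest, can be sketched numerically. All parameter values below are illustrative, not those fitted to the Morcles nappe.

```python
# Sketch of strain localization in a shear zone with temperature-dependent
# viscosity: for simple shear at constant shear stress tau, the strain rate
# is tau / mu(T), so exponential thermal weakening concentrates deformation
# at the hot channel centre. All parameter values are illustrative.
import math

n = 200                    # grid points across the channel
h = 1000.0                 # channel thickness (m)
tau = 1.0e6                # shear stress (Pa)
mu0 = 1.0e21               # reference viscosity (Pa s)
gamma = 0.05               # viscosity sensitivity to temperature (1/K)
T0 = 300.0                 # wall temperature (K)

def temperature(y):
    # Hypothetical profile: 100 K warmer at the channel centre than at walls
    return T0 + 100.0 * math.sin(math.pi * y / h)

def viscosity(T):
    # Frank-Kamenetskii-type exponential weakening with temperature
    return mu0 * math.exp(-gamma * (T - T0))

dy = h / n
ys = [(i + 0.5) * dy for i in range(n)]
strain_rate = [tau / viscosity(temperature(y)) for y in ys]

# Velocity profile: integrate the strain rate from the fixed wall at y = 0
velocity, v = [], 0.0
for e in strain_rate:
    v += e * dy
    velocity.append(v)

# Deformation is strongly localized at the hot centre of the channel
localization = strain_rate[n // 2] / strain_rate[0]
print(f"centre/wall strain-rate ratio: {localization:.0f}")
```

With a 100 K temperature contrast and this sensitivity, the centre deforms two orders of magnitude faster than the walls, which is the essence of forming a thin, fast shear zone inside a thick rock column.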
Abstract:
Osteoporosis is well recognized as a public health problem in industrialized countries. Because new treatments efficiently decrease fracture risk, it is of major interest to detect the patients who would benefit from such treatments. A diagnosis of osteoporosis is necessary before starting a specific treatment. This diagnosis is based on the measurement of the skeleton (hip and spine) with dual X-ray absorptiometry, using diagnostic criteria established by the World Health Organisation (WHO). In Switzerland, indications for bone densitometry are limited to precise clinical situations, so this technique cannot be used for screening. For screening purposes, peripheral measurements, and particularly quantitative ultrasound of bone, seem promising. Indeed, several prospective studies have clearly shown their power to predict hip fracture risk in women aged over 65 years. To facilitate the clinical use of bone ultrasound, thresholds for fracture risk and for osteoporosis of the hip will shortly be published. This will integrate bone ultrasound into a global concept that includes bone densitometry and its indications, as well as the other risk factors for osteoporosis recognized by the Swiss Association against Osteoporosis (ASCO).
Abstract:
Acute renal failure is a frequent and potentially lethal disease in intensive care units, and renal replacement therapy (RRT) is often required. Either intermittent or continuous methods of RRT can be used. When to start RRT and which method to use are not always clearly defined, and a global evaluation of the clinical situation is required. The choice of RRT modality depends on the general clinical context, hemodynamic stability, the type of molecules to be cleared and the haemorrhagic risk, as much as on habits and available resources. No study has so far shown the superiority of either continuous or intermittent renal replacement therapy. Collaboration between intensive care specialists and nephrologists allows the choice to be optimized for a given patient and makes it easier to move from one technique to another if required.
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation, even when the game is played anonymously and without repetition. Nowak and May's study, and much of the following work, was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs.
Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena, such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks can explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures. Furthermore, I extract and describe its community structure, taking into account the intensity of each collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
Thereafter, I combine evolutionary game theory with several network models along with the studied coauthorship network in order to highlight which specific network properties foster cooperation and shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree-heterogeneity of social networks and their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and the individual payoff calculations. Synopsis Le but de la théorie des jeux réside dans l'analyse de situations dans lesquelles différents acteurs sociaux, avec des objectifs souvent conflictuels, doivent individuellement prendre des décisions qui influenceront toutes le résultat global. Dans ce cadre, plusieurs jeux ont été inventés afin de saisir l'essence de divers dilemmes rencontrés dans d'importantes situations socio-économiques. Bien que ces jeux nous permettent souvent de comprendre le comportement d'êtres humains ou d'animaux en interactions, des expériences ont montré que les individus ont parfois tendance à coopérer dans des situations pour lesquelles la théorie classique des jeux prescrit de faire le contraire. Plusieurs mécanismes ont été invoqués pour tenter d'expliquer l'émergence de ce comportement coopératif inattendu. Parmi ceux-ci, la répétition des interactions, la réputation ou encore l'appartenance à des groupes reconnaissables ont souvent été mentionnés. Toutefois, les travaux de Nowak et May (1992) ont montré que le simple fait de disposer les joueurs selon une structure spatiale en leur permettant d'interagir uniquement avec leurs voisins directs est suffisant pour maintenir un certain niveau de coopération même si le jeu est joué de manière anonyme et sans répétitions. 
L'étude de Nowak et May, ainsi qu'un nombre substantiel de travaux qui ont suivi, étaient basés sur des structures régulières telles que des grilles à deux dimensions. Axelrod et al. (2002) ont montré qu'en randomisant le choix des voisins, i.e. en abandonnant une localisation géographique stricte, la coopération peut malgré tout émerger, pour autant que les schémas d'interactions restent stables au cours du temps. Ceci est un premier pas en direction d'une structure de réseau social. Toutefois, suite aux travaux précurseurs de sociologues des années soixante, tels que ceux de Milgram (1967), il est devenu clair ces dernières années qu'une grande partie des réseaux d'interactions sociaux et biologiques, et même quelques réseaux technologiques, possèdent des propriétés particulières, et partiellement inattendues, qui les distinguent de graphes réguliers ou aléatoires. Entre autres, ils affichent en général une distribution du degré relativement large ainsi qu'une structure de "petit-monde". Grossièrement parlant, un graphe "petit-monde" est un réseau où tout individu se trouve relativement près de tout autre individu en termes de distance sociale, une propriété également présente dans les graphes aléatoires mais absente des grilles régulières. Par contre, les réseaux "petit-monde" ont, contrairement aux graphes aléatoires, une certaine structure de localité, mesurée par exemple par une quantité appelée le "coefficient de clustering". Dans le même esprit, plusieurs situations réelles de conflit en économie et sociologie ne sont pas bien décrites ni par des positions géographiquement fixes des individus en grilles régulières, ni par des graphes aléatoires. De plus, il est bien connu que la structure même d'un réseau peut passablement influencer des phénomènes dynamiques tels que la manière qu'a une maladie de se répandre à travers une population, ou encore la façon dont des idées ou une information s'y propagent. 
Thus, over the last decade, research attention has quite naturally shifted from random and regular graphs to better models of social interaction structure. The main objective of this work is to discover whether the underlying graph structure of real social networks can explain why, in certain groups of humans or animals, we find levels of cooperation higher than those prescribed by classical game theory. To reach this goal, I begin by studying a real network of scientific collaborations and, using various statistical measures, I highlight the ways in which it differs from biological or technological networks. In addition, I extract and describe its community structure, taking into account the intensity of each collaboration. Finally, I examine the temporal evolution of the network from its origin up to its state in 2006, when the study was carried out, also suggesting an effective view of the network as opposed to a historical one. Subsequently, I combine evolutionary game theory with several network models and the aforementioned collaboration network, in order to determine the structural properties that promote cooperation and the mechanisms responsible for its maintenance. I show that, to avoid succumbing to defection, cooperators exploit, whenever possible, the degree heterogeneity of social networks as well as the underlying community structure of these same networks. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics rules used and on the way individual payoffs are computed.
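As a concrete illustration of the Nowak and May (1992) result discussed in this abstract, the following minimal Python sketch simulates a spatial prisoner's dilemma on a toroidal grid with deterministic imitate-the-best updating. The payoff structure (R = 1, T = b, S = P = 0) follows their setup, but the grid size, the four-neighbour (von Neumann) neighbourhood, and the parameter values are illustrative choices, not those of the thesis.

```python
import numpy as np

def play_round(grid, b=1.6):
    """One synchronous round of a Nowak-May-style spatial prisoner's dilemma.

    grid[i, j] == 1 means cooperator, 0 means defector.
    Payoffs per pairwise game: C vs C -> 1, D vs C -> b, otherwise 0.
    """
    payoff = np.zeros(grid.shape, dtype=float)
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # toroidal von Neumann neighbours
    # accumulate each site's payoff against its four neighbours
    for dx, dy in shifts:
        neigh = np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        payoff += np.where(grid == 1, neigh * 1.0, neigh * b)
    # imitation: each site adopts the strategy of its best-scoring neighbour,
    # keeping its own strategy if no neighbour did strictly better
    best_pay = payoff.copy()
    best_strat = grid.copy()
    for dx, dy in shifts:
        n_pay = np.roll(np.roll(payoff, dx, axis=0), dy, axis=1)
        n_strat = np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        better = n_pay > best_pay
        best_pay = np.where(better, n_pay, best_pay)
        best_strat = np.where(better, n_strat, best_strat)
    return best_strat

rng = np.random.default_rng(0)
grid = (rng.random((50, 50)) < 0.9).astype(int)  # start with 90% cooperators
for _ in range(50):
    grid = play_round(grid)
print("fraction of cooperators:", grid.mean())
```

With temptation b only moderately above 1, clusters of cooperators typically persist on the grid, which is the spatial-reciprocity effect the abstract refers to; the surviving fraction depends on b, the neighbourhood, and the update rule.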
Resumo:
A novel function of NF-kappaB in the development of most ectodermal appendages, including two types of murine pelage hair follicles, was detected in a mouse model with suppressed NF-kappaB activity (c(IkappaBalphaDeltaN)). However, the developmental processes regulated by NF-kappaB in hair follicles have remained unknown. Furthermore, the similarity between the phenotypes of c(IkappaBalphaDeltaN) mice and mice deficient in Eda A1 (tabby) or its receptor EdaR (downless) raised the issue of whether, in vivo, NF-kappaB regulates or is regulated by these novel TNF family members. We now demonstrate that epidermal NF-kappaB activity is first observed in placodes of primary guard hair follicles at day E14.5, and that in vivo NF-kappaB signalling is activated downstream of Eda A1 and EdaR. Importantly, ectopic signals which activate NF-kappaB can also stimulate guard hair placode formation, suggesting a crucial role for NF-kappaB in placode development. In downless and c(IkappaBalphaDeltaN) mice, placodes start to develop but rapidly abort in the absence of EdaR/NF-kappaB signalling. We show that NF-kappaB activation is essential for the induction of Shh and cyclin D1 expression and for subsequent placode downgrowth. However, cyclin D1 induction appears to be indirectly regulated by NF-kappaB, probably via Shh and Wnt. The strongly decreased number of hair follicles observed in c(IkappaBalphaDeltaN) mice compared with tabby mice indicates that additional signals, such as TROY, must regulate NF-kappaB activity in specific hair follicle subtypes.
Resumo:
BACKGROUND/AIMS: Gluco-incretin hormones increase the glucose competence of pancreatic beta-cells by incompletely characterized mechanisms. METHODS: We searched for genes that were differentially expressed in islets from control and Glp1r-/-; Gipr-/- (dKO) mice, which show reduced glucose competence. We performed overexpression and knockdown studies; insulin secretion analysis; analysis of gene expression in islets from control and diabetic mice and humans; and gene methylation and transcriptional analyses. RESULTS: Fxyd3 was the most up-regulated gene in glucose-incompetent islets from dKO mice. When overexpressed in beta-cells, Fxyd3 reduced glucose-induced insulin secretion by acting downstream of plasma membrane depolarization and Ca2+ influx. Fxyd3 expression was not acutely regulated by cAMP-raising agents in either control or dKO adult islets. Instead, expression of Fxyd3 was controlled by methylation of CpGs present in its proximal promoter region. Increased promoter methylation reduced Fxyd3 transcription, as assessed by the lower abundance of H3K4me3 at the transcriptional start site and by transcription reporter assays. This epigenetic imprinting was initiated perinatally and fully established in adult islets. Glucose-incompetent islets from diabetic mice and humans showed increased expression of Fxyd3 and reduced promoter methylation. CONCLUSIONS/INTERPRETATION: Because gluco-incretin secretion depends on feeding, the epigenetic regulation of Fxyd3 expression may link nutrition in early life to the establishment of adult beta-cell glucose competence; this epigenetic control is, however, lost in diabetes, possibly as a result of the gluco-incretin resistance and/or de-differentiation of beta-cells that are associated with the development of type 2 diabetes.
Resumo:
We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
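The foundational prediction referenced above can be summarized, as a sketch under the stated assumptions of additive gene action, weak selection, and constant environment and demography, by Hamilton's condition for a social allele to increase in frequency (standard textbook notation, not the paper's own):

```latex
\Delta \bar{p} > 0 \quad \Longleftrightarrow \quad -c + r\,b > 0
```

where $c$ is the phenotypic fitness cost to the actor, $b$ the benefit conferred on recipients, and $r$ a genealogical measure of relatedness; as the abstract notes, this sign condition holds for any frequency of the trait in the population.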
Resumo:
Background: Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide the frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, which we validated retrospectively in patients from the Swiss HIV Cohort Study. Methodology/Principal Findings: We built two prediction rules (a "Snap-shot rule" for a single sample and a "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules to 2608 untreated patients to classify their 18 061 CD4 counts as either justifiable or superfluous, according to their prior >= 5% or < 5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that either rule falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200x10(6)/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold. Conclusions/Significance: Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure the CD4 cell count 1 year after a count > 650 for a threshold of 200, > 900 for 350, or > 1150 for 500x10(6)/L, respectively. When CD4 counts fall below these limits, increased monitoring frequency becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
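The Snap-shot thresholds quoted in the conclusions can be sketched as a small Python helper. The count limits (650, 900, and 1150 x10(6)/L for treatment thresholds of 200, 350, and 500) come from the abstract; the function name, return values, and the 3-month fallback interval are hypothetical illustrations, not part of the study.

```python
# Treatment threshold (x10^6/L) -> single CD4 count above which the chance of
# crossing that threshold within one year is < 5% (values from the abstract).
SNAPSHOT_LIMITS = {200: 650, 350: 900, 500: 1150}

def next_cd4_interval_months(cd4_count, treatment_threshold):
    """Suggest a monitoring interval in months for an untreated patient.

    Hypothetical helper illustrating the Snap-shot rule: a count above the
    limit supports annual monitoring; otherwise revert to frequent monitoring
    (the 3-month value is an assumed example of common 3-6 month practice).
    """
    limit = SNAPSHOT_LIMITS[treatment_threshold]
    if cd4_count > limit:
        return 12  # well above threshold: measuring again in 1 year suffices
    return 3       # near or below the limit: increase monitoring frequency

print(next_cd4_interval_months(700, 200))  # 700 > 650 -> 12
print(next_cd4_interval_months(400, 350))  # 400 <= 900 -> 3
```

In practice the study's Track-shot rule also uses the trajectory of multiple prior counts, which this single-sample sketch deliberately omits.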