53 results for Dependent Failures, Interactive Failures, Interactive Coefficients, Reliability, Complex System
Abstract:
Kidneys are the main regulators of salt homeostasis and blood pressure. In the distal region of the tubule, active Na transport is finely tuned. This transport is regulated by various hormonal pathways, including aldosterone, which regulates reabsorption at the level of the ASDN, comprising the late DCT, the CNT and the CCD. In the ASDN, the amiloride-sensitive epithelial Na channel (ENaC) plays a major role in Na homeostasis, as evidenced by gain-of-function mutations in the genes encoding ENaC, which cause Liddle's syndrome, a severe form of salt-sensitive hypertension. In this disease, regulation of ENaC is compromised by mutations that delete or alter a PY motif in ENaC. Such mutations interfere with Nedd4-2-dependent ubiquitylation of ENaC, leading to reduced endocytosis of the channel and consequently to increased channel activity at the cell surface. After endocytosis, ENaC is targeted to the lysosome and rapidly degraded. As with other ubiquitylated and endocytosed plasma membrane proteins (such as the EGFR), it is likely that the multi-protein complex system ESCRT is involved. To investigate the involvement of this system we tested the role of one of the ESCRT proteins, Tsg101. Here we show that Tsg101 interacts with all three ENaC subunits, both endogenously and in transfected HEK-293 cells. Furthermore, mutation of the cytoplasmic lysines of the ENaC subunits disrupts this interaction, indicating a potential involvement of ubiquitin in the Tsg101/ENaC interaction. Tsg101 knockdown in renal epithelial cells increases both the total and the cell-surface pool of ENaC, thus implicating Tsg101, and consequently the ESCRT system, in ENaC degradation by the endosomal/lysosomal system. - The kidneys are the main organs responsible for regulating blood pressure and the salt balance of the body. In the distal region of the tubule, active sodium transport is finely regulated.
This transport is controlled by several hormones such as aldosterone, which regulates reabsorption in the ASDN, a segment comprising the late DCT, the CNT and the CCD. In the ASDN, the amiloride-sensitive epithelial sodium channel (ENaC) plays a major role in sodium homeostasis, as demonstrated by gain-of-function mutations in the genes encoding ENaC, which cause Liddle's syndrome, a severe form of salt-sensitive hypertension. In this disease, the regulation of ENaC is compromised by mutations that delete or alter the PY motif present on the ENaC subunits. These mutations prevent the ubiquitylation of ENaC by Nedd4-2, leading to reduced endocytosis of the channel and consequently to increased ENaC activity at the cell surface. After endocytosis, ENaC is directed to the lysosome and rapidly degraded. As with other ubiquitylated and endocytosed membrane proteins (such as the EGFR), the multi-protein ESCRT complex is likely involved in the transport of ENaC to the lysosome. To study the involvement of the ESCRT system in ENaC regulation, we tested the role of one protein of these complexes, Tsg101. Our study demonstrates that Tsg101 binds all three ENaC subunits, both upon co-transfection in HEK-293 cells and endogenously. Furthermore, we demonstrated the importance of ubiquitin in this interaction by mutating all the cytoplasmic lysines of the ENaC subunits, thereby preventing their ubiquitylation. Finally, knockdown of Tsg101 in renal epithelial cells increases ENaC expression in both the total and the cell-surface pool, indicating a role for Tsg101, and hence the ESCRT system, in ENaC degradation via the endosome/lysosome pathway. - The human body is composed of organs, each specialized in a specific function.
Each organ is composed of cells, which carry out the function of the organ in question. These cells are characterized by: a membrane, which isolates their internal compartment (the intracellular medium, or cytoplasm) from the external fluid (the extracellular medium); a nucleus, where the DNA is located; and proteins, functional units each with a well-defined role in the cell. The separation between the outside and the inside of the cell is essential for maintaining the composition of these media and for the proper functioning of the cells and of the organism. Among these components, sodium plays an essential role because it conditions the maintenance of blood volume by contributing to the maintenance of the extracellular volume. An increase of sodium in the body therefore increases blood volume and thus causes hypertension. Controlling the amount of sodium present in the body is consequently essential for its proper functioning. Sodium is supplied by the diet, and it is in the kidney that the amount of sodium retained by the body is controlled, so as to maintain a normal sodium concentration in the extracellular medium. The kidney reabsorbs all kinds of solutes needed by the body before excreting the waste or surplus of these solutes by producing urine. The kidney reabsorbs sodium through various proteins; among them, we focused on a protein called ENaC. This protein plays an important role in sodium reabsorption, and when it malfunctions, as observed in certain genetic diseases, the result is hypo- or hypertension.
The problems resulting from the malfunction of this protein therefore oblige the cell to regulate ENaC efficiently through various mechanisms, notably by decreasing its expression and degrading the surplus. In this thesis work, we focused on the mechanism involved in ENaC degradation, and more precisely on a set of proteins, called ESCRT, which "escorts" a protein to a subcompartment inside the cell where it is degraded.
Abstract:
Peroxynitrite is a potent oxidant and nitrating species formed from the reaction between the free radicals nitric oxide and superoxide. An excessive formation of peroxynitrite represents an important mechanism contributing to cell death and dysfunction in multiple cardiovascular pathologies, such as myocardial infarction, heart failure and atherosclerosis. Whereas initial works focused on direct oxidative biomolecular damage as the main route of peroxynitrite toxicity, more recent evidence, mainly obtained in vitro, indicates that peroxynitrite also behaves as a potent modulator of various cell signal transduction pathways. Due to its ability to nitrate tyrosine residues, peroxynitrite affects cellular processes dependent on tyrosine phosphorylation. Peroxynitrite also exerts complex effects on the activity of various kinases and phosphatases, resulting in the up- or downregulation of signalling cascades, in a concentration- and cell-dependent manner. Such roles of peroxynitrite in the redox regulation of key signalling pathways for cardiovascular homeostasis, including protein kinase B and C, the MAP kinases, Nuclear Factor Kappa B, as well as signalling dependent on insulin and the sympatho-adrenergic system are presented in detail in this review.
Abstract:
The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications owing to their significantly lower cost than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communication networks and hand-held devices. Already in the 1990s it was foreseen that IT and telecommunications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advance of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture design is a separate industry from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice thanks to hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other kind of computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies, both in software and in hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating-system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact through the SoC- and software-platform-based disruptions in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware.
They enjoy admirable profitability levels on a very narrow customer base, thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build for the industrial automation market to face, in due course, an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions.
The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and the related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Unraveling the effect of selection vs. drift on the evolution of quantitative traits is commonly achieved by one of two methods. Either one contrasts population differentiation estimates for genetic markers and quantitative traits (the Qst-Fst contrast) or multivariate methods are used to study the covariance between sets of traits. In particular, many studies have focused on the genetic variance-covariance matrix (the G matrix). However, both drift and selection can cause changes in G. To understand their joint effects, we recently combined the two methods into a single test (accompanying article by Martin et al.), which we apply here to a network of 16 natural populations of the freshwater snail Galba truncatula. Using this new neutrality test, extended to hierarchical population structures, we studied the multivariate equivalent of the Qst-Fst contrast for several life-history traits of G. truncatula. We found strong evidence of selection acting on multivariate phenotypes. Selection was homogeneous among populations within each habitat and heterogeneous between habitats. We found that the G matrices were relatively stable within each habitat, with proportionality between the among-populations (D) and the within-populations (G) covariance matrices. The effect of habitat heterogeneity is to break this proportionality because of selection for habitat-dependent optima. Individual-based simulations mimicking our empirical system confirmed that these patterns are expected under the selective regime inferred. We show that homogenizing selection can mimic some effect of drift on the G matrix (G and D almost proportional), but that incorporating information from molecular markers (multivariate Qst-Fst) allows disentangling the two effects.
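For readers unfamiliar with the contrast used above: Qst is the quantitative-trait analogue of Fst, and under pure drift the two are expected to be roughly equal, so a significant difference points to selection. A minimal sketch of the standard univariate Qst formula for outbred diploids (generic variance components, not the authors' hierarchical multivariate estimator):

```python
def qst(var_between, var_within):
    """Qst for an outbred diploid set of populations:
    between-population additive genetic variance over total,
    with the within-population term counted twice."""
    return var_between / (var_between + 2.0 * var_within)

# Under neutrality Qst is expected to roughly equal Fst;
# Qst >> Fst suggests divergent selection among populations,
# Qst << Fst suggests homogenizing selection.
print(qst(0.2, 0.4))  # 0.2 / (0.2 + 0.8) = 0.2
```

Comparing this value against a marker-based Fst estimate is the univariate version of the test; the article's multivariate extension applies the same logic to whole D and G matrices.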
Abstract:
This article describes the composition of fingermark residue as being a complex system with numerous compounds coming from different sources and evolving over time from the initial composition (corresponding to the composition right after deposition) to the aged composition (corresponding to the evolution of the initial composition over time). This complex system will additionally vary due to effects of numerous influence factors grouped in five different classes: the donor characteristics, the deposition conditions, the substrate nature, the environmental conditions and the applied enhancement techniques. The initial and aged compositions as well as the influence factors are thus considered in this article to provide a qualitative and quantitative review of all compounds identified in fingermark residue up to now. The analytical techniques used to obtain these data are also enumerated. This review highlights the fact that despite the numerous analytical processes that have already been proposed and tested to elucidate fingermark composition, advanced knowledge is still missing. Thus, there is a real need to conduct future research on the composition of fingermark residue, focusing particularly on quantitative measurements, aging kinetics and effects of influence factors. The results of future research are particularly important for advances in fingermark enhancement and dating technique developments.
Abstract:
Being repeatedly confronted with very difficult situations from childhood onwards influences the way individuals will later respond to even mildly stressful events. The hypothalamic-pituitary-adrenal (HPA) axis is a complex system implicated in regulating neuroendocrine responses to stress. Its activation produces, among other things, the "stress hormone" cortisol. However, the regulation of the physiological response to stress depends on psychological factors linked with the representations that individuals develop regarding their close relationships, i.e. attachment. Furthermore, attachment representations seem to be associated with oxytocin, a hormone involved both in cortisol reduction and in positive social behaviours.
Abstract:
Thousands of chemical compounds enter the natural environment, but many have unknown effects and consequences, in particular at low concentrations. This thesis work contributes to our understanding of pollution effects by using bacteria as test organisms. Bacteria are important for this question, first because some of them degrade and transform pollutants into less harmful compounds, and second because they themselves can be inhibited in their reproduction by exposure to toxic compounds. When inhibitory effects occur, this may change the composition of the microbial community in the long run, leading to altered or diminished ecosystem services by those communities. As a result, chemicals of anthropogenic origin may accumulate and persist in the environment and, finally, affect higher organisms as well. In addition to acquiring a basic understanding of the effects of pollutants at low concentrations on bacterial communities, an applied goal of this thesis work was to develop bacteria-based tests to screen new organic chemicals for toxicity and biodegradation. In the first part of this work we developed a flow cytometry-based assay on SYTO9 plus ethidium bromide- or propidium iodide-stained cells of Pseudomonas fluorescens, exposed or not to a variety of pollutants under oligotrophic growth conditions. Flow cytometry (FC) allows fast and accurate counting of bacterial cells with simultaneous assessment of their physiological state, in particular in combination with different fluorescent dyes. Here we employed FC and fluorescent dyes to monitor the effect that pollutants may exert on Pseudomonas fluorescens SV3. First we designed an oligotrophic growth test, which enabled us to follow population growth at low densities (10^4 to 10^7 cells per ml) using 0.1 mM sodium acetate as carbon source.
Cells in the oligotrophic milieu were then exposed or not to a variety of common pollutants, such as 2-chlorobiphenyl (2CBP), naphthalene (NAH), 4-chlorophenol (4CP), tetradecane (TD), mercury chloride (HgCl2) or benzene, in different dosages. Exposed culture samples were stained with SYTO9 (a green fluorescent dye binding nucleic acids, generally staining all cells) in combination with propidium iodide (PI) or ethidium bromide (EB), both dyes being membrane-integrity indicators. We observed that most of the tested compounds decreased population growth in a dosage-dependent manner. SYTO9/PI or SYTO9/EB staining then revealed that chemical exposure led to the emergence of subpopulations of live and injured or dead cells. By modeling population growth on the total cell numbers in the population, or on the subpopulation of live cells only, we inferred that even in stressed populations live cells multiply at rates no different from unexposed controls. The net decrease in population growth would thus be a consequence of more and more cells being unable to multiply at all, rather than of all cells multiplying at slower rates. In addition, the proportion of injured cells correlated with the compound dosage. We concluded that the oligotrophic test may be useful to assess the toxicity of unknown chemicals on a variety of model bacteria. Multiple tests can be run in parallel and effects are rapidly measured within a period of 8 hours. Interestingly, in the same exposure tests with P. fluorescens SV3 we observed that some chemicals which did not lead to a reduction of net population growth rates did cause measurable effects on live cells. This was mainly observed in cells within the live subpopulation as an increase of the EB fluorescence signal. We showed that SYTO9/EB is a more useful combination of dyes than SYTO9/PI, because PI fluorescence tends to increase only when cells are effectively dead, but not so much in live cells (less than twofold).
In contrast, EB geometric mean fluorescence in live cells increased up to eightfold after exposure to toxic compounds. All compounds, even at the lowest concentration, caused a measurable increase in EB geometric mean fluorescence, especially after 2 h of incubation. This effect was found to be transient for cells exposed to 2CBP and 4CP, but chronic for cells incubated with TD and NAH (ultimately leading to cell death). In order to understand the mechanism underlying the observed effects we used known membrane or energy uncouplers. The pattern of EB signal increase in chemical-exposed populations resembled mostly that of EDTA, although EB fluorescence in EDTA-treated or pasteurized cells was even higher than after exposure to the four test chemicals. We conclude that the ability of cells to efflux EB under equilibrium conditions is an appropriate measure of the potential of a chemical to exert toxicity. Since most bacterial species possess efflux systems for EB that all require cellular energy, our test should be more widely applicable to infer toxicity effects of chemical exposure on the physiological status of the bacterial cell. To better understand the effect of toxicant exposure on efflux defense systems, we studied 2-hydroxybiphenyl (2-HBP) toxicity to Pseudomonas azelaica HBP1. We showed that 2-HBP exerts toxicity even on P. azelaica HBP1, but only at concentrations higher than 0.5 mM. Above this concentration, transient loss of membrane polarization and integrity occurred, as we conclude from staining of growing cells with fluorescent dyes. Cells finally recover and resume growth on 2-HBP. The high resistance of P. azelaica HBP1 to 2-HBP was found to be the result of an efficient MexAB-OprM-type efflux pump system counteracting passive influx of this compound into the membrane and cellular interior.
Mutants with disrupted mexA, mexB and oprM genes no longer grew on 2-HBP at concentrations above 100 μM, whereas below this concentration we found a 2-HBP-concentration-dependent decrease of growth rate. The MexAB-OprM system in P. azelaica HBP1 is indeed an efflux pump for ethidium bromide as well. By introducing gfp reporter fusions responsive to intracellular 2-HBP concentrations into HBP1 wild type or the mutants, we demonstrated that 2-HBP enters the cells of both in a similar way. In contrast, the reporter system in wild-type cells does not react to 2-HBP at an outside concentration of 2.4 μM, whereas in mutant cells it does. This suggests that wild-type cells pump 2-HBP to the outside very effectively, preventing its accumulation; 2-HBP metabolism on its own is not efficient enough to lower the intracellular concentration and prevent toxicity. We conclude that the resistance of P. azelaica HBP1 to 2-HBP is mainly due to an efficient efflux system and that 2-HBP in high concentrations exerts narcotic effects on the bacterial membrane. In the last part of this thesis, we investigated the possibilities for bacteria to degrade pollutants at low concentrations (1 mg per L and below). As test compounds we used 2-hydroxybiphenyl, antibiotics and a variety of fragrances, many of which are known to be difficult to biodegrade. By accurately counting low numbers of bacterial cells we could demonstrate that specific growth on these compounds is possible. We demonstrated the accuracy of FC counting at low cell numbers (down to 10^3 bacterial cells per ml). We then tested whether bacterial population growth could be specifically monitored at the expense of low substrate concentrations, using P. azelaica HBP1. A perfect relationship was found between growth rate, yield and 2-HBP concentration in the range of 0.1 up to 5 mg per L. Mixing P.
azelaica within sludge, however, suggested that growth yields in a mixed community can be much lower than in pure culture, perhaps because of loss of metabolic intermediates. We then isolated new strains from activated sludge using 2-HBP or antibiotics (Nal, AMP, SMX) at low concentrations (0.1-1 mg per L) as sole carbon and energy substrate, and PAO microdishes. The purified strains were then examined for growth on their respective substrates, which interestingly showed that none of the strains can withstand concentrations of the target substrate higher than 1 or 10 mg per L. Thus, bacteria must exist that contribute to compound degradation at low pollutant concentrations but are inhibited at higher concentrations. Finally, we tested whether specific biomass growth (in number of cells) at the expense of pollutants can also be detected with communities as starting material. To this end, we focused on a number of fragrance chemicals and measured community biomass increase by flow cytometry cell counting on two distinct starter communities: (i) diluted Lake Geneva water, and (ii) diluted activated sludge from a wastewater treatment plant. We observed that most of the test compounds indeed resulted in a significant biomass increase in the starter community compared to a no-carbon-added control, but activated sludge and Lake Geneva water strongly differed (almost mutually exclusively) in their capacity to degrade the test chemicals. In two cases for activated sludge the same type of microbial community developed upon compound exposure, as concluded from terminal restriction fragment length polymorphism analysis of community-purified and PCR-amplified 16S rRNA gene fragments. To properly test compound biodegradability it is thus important to use starter communities of different origins. We conclude that FC counting can be a valuable tool to screen chemicals for their biodegradability and toxicity.
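The growth-rate inference described above, where live cells keep multiplying at control-like rates while a growing fraction stops dividing, can be sketched with specific growth rates computed from cell counts; the counts below are invented for illustration, not data from the thesis:

```python
import math

def specific_growth_rate(n_start, n_end, hours):
    # mu (per hour) from exponential growth N(t) = N(0) * exp(mu * t)
    return math.log(n_end / n_start) / hours

# Hypothetical 8-hour oligotrophic run (cells per ml):
# total counts of an exposed culture grow more slowly ...
mu_total = specific_growth_rate(1.0e4, 3.0e4, 8.0)
# ... but counting only the live (SYTO9-positive, EB/PI-negative)
# subpopulation recovers a rate close to the unexposed control,
mu_live = specific_growth_rate(0.8e4, 3.2e4, 8.0)
mu_control = specific_growth_rate(1.0e4, 4.0e4, 8.0)
# consistent with a growing non-dividing fraction rather than
# uniformly slower division of all cells.
```

With these illustrative numbers, mu_live equals mu_control while mu_total falls below it, which is the pattern the modeling in the abstract reports.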
- Thousands of chemical compounds are released into the environment, but many have unknown effects, in particular at low concentrations. This thesis work contributes to our understanding of pollution effects by using bacteria as test organisms. Bacteria are important for studying this question because some of them can degrade or transform pollutants, but also because they themselves can be inhibited in their reproduction after exposure to toxic compounds. When inhibitory effects occur, the composition of the microbial community may change in the long term, leading to a reduction of the ecosystem services provided by these communities. As a consequence, after their release into the environment, chemicals of anthropogenic origin may accumulate and persist there, exerting still unknown effects on living organisms. In addition to acquiring basic knowledge of the effects of pollutants at low concentrations on microbial communities, an applied goal of this thesis was to develop bacteria-based tests to screen new compounds for toxicity or biodegradation. In the first part of this work, we developed a flow cytometry (FC) assay on Pseudomonas fluorescens cells stained with ethidium bromide or propidium iodide and exposed or not to a range of pollutants under oligotrophic growth conditions. Flow cytometry is a technique with many applications in environmental microbiology, mainly because it allows fast and accurate counting as well as assessment of physiological state, in particular when combined with fluorescent stains.
Here, we used FC and fluorescent dyes to measure the effect that certain pollutants can exert on Pseudomonas fluorescens SV3. First we designed oligotrophic tests that allowed us to follow the complete growth of cultured cells at low densities (10^4 to 10^7 cells per ml), on 0.1 mM sodium acetate, in the presence or absence of chemicals (2-chlorobiphenyl (2CBP), naphthalene (NAH), 4-chlorophenol (4CP), tetradecane (TD), mercury(II) chloride (HgCl2)) at different concentrations. To follow the fate of the bacteria, both at the level of the individual cell and at that of the whole population, after exposure to series of chemical compounds, we counted by FC cells stained with SYTO9 (a green fluorescent nucleic-acid stain used to discriminate cells from background) in combination with propidium iodide (PI) or ethidium bromide (EB), indicators of cell membrane integrity. We observed that many of the tested compounds had an effect on bacterial growth, resulting in a decrease of the population's reproduction rate. Moreover, the double staining used in this study, SYTO9/PI or SYTO9/EB, showed that the tested chemicals induced a heterogeneous response of the cells in the population, dividing it into "live", "injured" and "dead" subpopulations. The cell numbers obtained from the counts, and the proportions of "live" and "injured/dead" cells, were then used to model the growth of P. fluorescens SV3 exposed to the chemicals. The net reduction in population growth is a consequence of more and more cells being unable to reproduce, rather than of slower growth of the population as a whole.
Furthermore, the proportion of damaged cells correlated with the chemical dose. These results allowed us to conclude that the oligotrophic assay we developed can be used to evaluate the toxicity of chemicals on different bacterial models. Multiple assays can be run in parallel, and effects are measured within eight hours. We further infer that the chemicals exert an effect on the growth of P. fluorescens SV3 cells that is heterogeneous among the cells in the population and depends on the chemical. Interestingly, in the same exposure assays with P. fluorescens SV3, we observed that some compounds that did not reduce the net population growth rate nonetheless caused measurable effects on healthy cells. This was mainly observed in the "healthy" fraction of cells as an increase in the EB fluorescence signal. We first showed that SYTO9/EB was a more useful stain combination than SYTO9/PI, because PI fluorescence tends to increase only when cells are actually dead, and not in healthy cells (less than a two-fold increase). By contrast, the mean EB fluorescence in healthy cells increased up to eight-fold after exposure to toxic compounds. All compounds, even at the lowest concentrations, induced a measurable increase in mean EB fluorescence, particularly after two hours of incubation. This effect proved to be transient for cells exposed to 2CBP and 4CP, but chronic for cells incubated with TD and NAH (leading to cell death). To understand the mechanisms underlying the observed effects, we used energy or membrane uncouplers.
The increase in the EB signal caused by the chemicals resembled that exerted by the divalent-cation chelator EDTA. However, the EB signal intensities of cells exposed to the tested chemicals never reached the values of cells treated with EDTA or pasteurized. We conclude that the oligotrophic assay using (SYTO9/)EB staining of cells exposed or not to a chemical is useful for evaluating the toxic effects that pollutants exert on bacterial physiology. To better understand how an efflux-pump defence system reacts after exposure to a toxicant, we studied the toxicity of 2-hydroxybiphenyl (2-HBP) on Pseudomonas azelaica HBP1. We showed that 2-HBP is toxic even to HBP1, but only at concentrations above 0.5 mM. Above this concentration, transient losses of membrane integrity and polarization occur, as shown by staining of growing cells. The cells are eventually able to recover and resume growth on 2-HBP. The strong resistance of P. azelaica HBP1 to 2-HBP turned out to be the result of a MexAB-OprM-type efflux pump system that counterbalances the passive influx of this compound across the membrane. By constructing mutants with insertions in the mexA, mexB and oprM genes and fusions with the gfp reporter gene, we showed that disruption of any part of the efflux system led to increased accumulation of 2-HBP in the cell compared with the wild-type strain HBP1, causing reduced resistance to 2-HBP and a lower cell reproduction rate. Similar efflux systems are widespread among many bacterial species.
They are thought to be responsible for resistance to chemicals such as fluorescent dyes (ethidium bromide) and antibiotics. We conclude that the resistance of P. azelaica HBP1 to 2-HBP is mainly due to an efficient efflux system, and that 2-HBP at high concentrations has a deleterious effect on the bacterial membrane. Building on FC cell counting, we then developed a method to evaluate the biodegradability of pollutants such as 2-HBP and of antibiotics (nalidixic acid (Nal), ampicillin (AMP) or sulfamethoxazole (SMX)) at low concentrations (1 mg per L and below), by monitoring the specific growth of pure and mixed microbial cultures on the compound. Using precise counting of low cell numbers, we demonstrated that specific growth on these compounds is possible. We illustrated the precision of flow cytometric counting at low cell densities (down to 10^3 cells per ml). We then tested whether it was possible to follow dynamically the growth of a cell population on low substrate concentrations, using P. azelaica HBP1. A near-perfect relationship was found between growth rate, yield and 2-HBP concentration (between 0.1 and 5 mg per L). By mixing HBP1 with activated sludge, we showed that the yield in mixed communities could be much lower than in pure culture, possibly as a result of a loss of metabolic intermediates. We then isolated new strains from activated sludge using 2-HBP or antibiotics (Nal, AMP, SMX) at low concentrations (0.1-1 mg per L) as sole carbon and energy sources. In combination with this, we also used PAO microplates. The purified strains were then examined for growth on their respective substrates.
Interestingly, all of these strains proved unable to survive at substrate concentrations above 1 or 10 mg per L. Thus, there exist bacteria that contribute to the degradation of compounds at low pollutant concentrations but are inhibited when these concentrations become higher. Finally, we investigated whether it is possible to detect specific growth of biomass at the expense of a pollutant, starting from a microbial community. We therefore focused on selected compounds and measured the increase in community biomass by flow cytometry. We counted two distinct starting communities: (i) a dilution of water from Lake Geneva, and (ii) a dilution of activated sludge from a wastewater treatment plant. We observed that most of the tested compounds led to an increase in the starting biomass relative to the control without an added carbon source. Nevertheless, the Lake Geneva and treatment-plant samples differed widely (in a mutually exclusive way) in their capacity to degrade the chemical compounds. In two cases from the treatment plant, the same type of microbial community developed after exposure to the compounds, as demonstrated by T-RFLP analysis of purified, PCR-amplified community 16S rRNA fragments. To properly test the biodegradability of a compound, it is therefore important to use starting communities of different origins. We conclude that flow cytometric counting can be a highly useful tool to assess the biodegradability and toxicity of chemical compounds.
Abstract:
How glucose sensing by the nervous system impacts the regulation of β cell mass and function during postnatal development and throughout adulthood is incompletely understood. Here, we studied mice with inactivation of glucose transporter 2 (Glut2) in the nervous system (NG2KO mice). These mice displayed normal energy homeostasis but developed late-onset glucose intolerance due to reduced insulin secretion, which was precipitated by high-fat diet feeding. The β cell mass of adult NG2KO mice was reduced compared with that of WT mice due to lower β cell proliferation rates in NG2KO mice during the early postnatal period. The difference in proliferation between NG2KO and control islets was abolished by ganglionic blockade or by weaning the mice on a carbohydrate-free diet. In adult NG2KO mice, first-phase insulin secretion was lost, and these glucose-intolerant mice developed impaired glucagon secretion when fed a high-fat diet. Electrophysiological recordings showed reduced parasympathetic nerve activity in the basal state and no stimulation by glucose. Furthermore, sympathetic activity was also insensitive to glucose. Collectively, our data show that GLUT2-dependent control of parasympathetic activity defines a nervous system/endocrine pancreas axis that is critical for β cell mass establishment in the postnatal period and for long-term maintenance of β cell function.
Abstract:
OBJECTIVE: The aim of the study was to validate a French adaptation of the 5th version of the Addiction Severity Index (ASI) in a Swiss sample of illicit drug users. PARTICIPANTS AND SETTING: The participants were 54 French-speaking drug-dependent patients, most of them with opiates as the drug of first choice. PROCEDURE: Analyses of internal consistency (convergent and discriminant validity) and reliability, including measures of test-retest and inter-observer correlations, were conducted. RESULTS: Besides good applicability of the test, the results on composite scores (CSs) are comparable to those obtained in a sample of American opiate-dependent patients. Across the seven dimensions of the ASI, Cronbach's alpha ranged from 0.42 to 0.76 and test-retest correlation coefficients ranged from 0.48 to 0.98, while for CSs, inter-observer correlations ranged from 0.76 to 0.99. CONCLUSIONS: Despite several limitations, the French version of the ASI presents acceptable applicability, validity and reliability in a sample of drug-dependent patients.
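The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from per-item scores. The sketch below uses invented item scores for demonstration; the ASI data themselves are not reproduced here.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# The item scores below are hypothetical, for illustration only.

def cronbach_alpha(items):
    """items: one inner list per item, each of equal length
    (one entry per respondent)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items rated by five respondents:
items = [[2, 4, 3, 5, 1],
         [3, 4, 3, 5, 2],
         [2, 5, 4, 4, 1]]
alpha = cronbach_alpha(items)  # high, since the items co-vary strongly
```

High alpha here reflects that the three made-up items rank respondents almost identically; values near the study's lower bound (0.42) would arise from items that barely co-vary.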
Abstract:
AIM: The aim of this study was to evaluate a new pedagogical approach to teaching fluid, electrolyte and acid-base pathophysiology to undergraduate students. METHODS: This approach combines traditional lectures, the study of clinical cases on the web and a final interactive discussion of these cases in the classroom. On the web, students are asked to select the laboratory tests that seem most appropriate for understanding the pathophysiological condition underlying the clinical case. The percentage of students having chosen a given test is made available to the teacher, who uses it in an interactive session to stimulate discussion with the whole class. The same teacher used the same case studies in 2 consecutive years of the third year of the curriculum. RESULTS: The majority of students answered the questions on the web as requested and positively evaluated their experience with this form of teaching and learning. CONCLUSIONS: Complementing traditional lectures with online case-based studies and interactive group discussions therefore represents a simple means to promote the learning and understanding of complex pathophysiological mechanisms. This simple problem-based approach to teaching and learning may be implemented to cover all fields of medicine.
Abstract:
Objective: To test the efficacy of teaching motivational interviewing (MI) to medical students. Methods: Thirteen 4th-year medical students volunteered to participate. Seven days before and 7 days after an 8-hour interactive MI training workshop, each student performed a video-recorded interview with two standardized patients: a 60-year-old alcohol-dependent woman and a 50-year-old cigarette-smoking man. Students' counseling skills were coded by two blinded clinicians using the Motivational Interviewing Treatment Integrity 3.0 (MITI). Inter-rater reliability was calculated for all interviews, and a test-retest was completed on a sub-sample of 10 consecutive interviews three days apart. Differences between MITI scores before and after training were calculated and tested using non-parametric tests. Effect size was approximated by calculating the probability that posttest scores are greater than pretest scores (P*=P(Pre<Post)+1/2 P(Pre=Post)), with P*>1/2 indicating greater scores at posttest, P*=1/2 no effect, and P*<1/2 smaller scores at posttest. Results: Median differences between MITI scores before and after MI training indicated a general progression in MI skills: MI spirit global score (median difference=1.5, interquartile range=1.5, p<0.001, P*=0.90); Empathy global score (med diff=1, IQR=0.5, p<0.001, P*=0.85); percentage of MI-adherent skills (med diff=36.6, IQR=50.5, p<0.001, P*=0.85); percentage of open questions (med diff=18.6, IQR=21.6, p<0.001, P*=0.96); reflections/questions ratio (med diff=0.2, IQR=0.4, p<0.001, P*=0.81). Only the Direction global score and the percentage of complex reflections were not significantly improved (med diff=0, IQR=1, p=0.53, P*=0.44, and med diff=4.3, IQR=24.8, p=0.48, P*=0.62, respectively). Inter-rater reliability analyses indicated that weighted kappa ranged from 0.14 for Direction to 0.51 for Collaboration, and ICC ranged from 0.28 for Simple reflection to 0.95 for Closed question.
Test-retests indicated that weighted kappa ranged from 0.27 for Direction to 0.80 for Empathy, and ICC ranged from 0.87 for Complex reflection to 0.98 for Closed question. Conclusion: This pilot study indicated that an 8-hour MI training for voluntary 4th-year medical students resulted in a significant improvement of MI skills. A larger sample of unselected medical students should be studied to generalize the benefit of MI training to medical students. Inter-rater reliability and test-retest results suggested that coders' training should be intensified.
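The effect size defined in the abstract, P* = P(Pre<Post) + 1/2 P(Pre=Post), is simply a count over paired scores. The sketch below illustrates the computation on invented pre/post pairs; the study's own scores are not reproduced.

```python
# P* = P(Pre < Post) + 0.5 * P(Pre = Post), computed over paired scores.
# The score pairs below are hypothetical examples.

def p_star(pre, post):
    pairs = list(zip(pre, post))
    less = sum(1 for a, b in pairs if a < b)    # posttest higher
    equal = sum(1 for a, b in pairs if a == b)  # tie
    return (less + 0.5 * equal) / len(pairs)

pre  = [2, 3, 3, 4, 2, 3]
post = [4, 4, 3, 5, 3, 5]
effect = p_star(pre, post)  # > 0.5, since most pairs improved
```

With five of six pairs improving and one tie, P* is (5 + 0.5)/6, well above the no-effect value of 1/2; identical pre and post lists would give exactly 0.5.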
Abstract:
BACKGROUND: Patient behavior accounts for half or more of the variance in health, disease, mortality, and treatment outcomes and costs. Counseling using motivational interviewing (MI) effectively improves patients' substance use and medical compliance behavior. Medical training should include substantial focus on this key issue of health promotion. The objective of this study is to test the efficacy of teaching MI to medical students. METHODS: Thirteen fourth-year medical students volunteered to participate. Seven days before and after an 8-hour interactive MI training workshop, each student performed a video-recorded interview with two standardized patients: a 60-year-old alcohol-dependent female consulting a primary care physician for the first time about fatigue and depression symptoms, and a 50-year-old male cigarette smoker hospitalized for myocardial infarction. All 52 videos (13 students × 2 interviews, before and after training) were independently coded by two blinded clinicians using the Motivational Interviewing Treatment Integrity scale (MITI 3.0). MITI scores consist of global spirit (Evocation, Collaboration, Autonomy/Support), global Empathy and Direction, and behavior count summary scores (% Open questions, Reflection-to-question ratio, % Complex reflections, % MI-adherent behaviors). A "beginning proficiency" threshold (BPT) is defined for each of these 9 scores. The proportion of students reaching BPT before and after training was compared using McNemar exact tests. Inter-rater reliability was evaluated by comparing double coding, and test-retest analyses were conducted on a sub-sample of 10 consecutive interviews by each coder. Weighted kappas were used for global rating scales, and intra-class correlations (ICC) were computed for behavior count summary scores.
RESULTS: The percentage of counselors reaching BPT before and after MI training increased significantly for Evocation (15% to 65%, p<.001), Collaboration (27% to 77%, p=.001), Autonomy/Support (15% to 54%, p=.006), and % Open questions (4% to 38%, p=.004). Proportions increased, but not statistically significantly, for Empathy (38% to 58%, p=.18), Reflection-to-question ratio (0% to 15%, p=.12), % Complex reflections (35% to 54%, p=.23), and % MI-adherent behaviors (8% to 15%, p=.69). There was virtually no change for the Direction scale (92% to 88%, p=1.00). The reliability analyses produced mixed results. Weighted kappas ranged from .14 for Direction to .51 for Collaboration for inter-rater reliability, and from .27 for Direction to .80 for Empathy for test-retest. ICCs ranged from .20 for Complex reflections to .89 for Open questions (inter-rater), and from .67 for Complex reflections to .99 for Reflection-to-question ratio (test-retest). CONCLUSION: This pilot study indicates that a single 8-hour training in motivational interviewing for voluntary fourth-year medical students results in a significant improvement of some MI skills. A larger sample of randomly selected medical students observed over longer periods should be studied to test whether the benefits of MI training generalize to medical students at large. Inter-rater reliability and test-retest findings indicate a need for caution when interpreting the present results, as well as for more intensive coder training so that more dimensions of the process are captured appropriately in future studies.
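The weighted kappas reported for the global rating scales penalize disagreements between coders in proportion to their distance on the ordinal scale. A compact sketch of linearly weighted Cohen's kappa follows; the two coders' ratings are invented ordinal scores, not the study's data.

```python
# Linearly weighted Cohen's kappa: 1 - (observed weighted disagreement /
# chance-expected weighted disagreement), with weight |i-j|/(k-1).
# The ratings below are hypothetical examples on a 1-5 scale.

def weighted_kappa(r1, r2, categories):
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # observed joint proportions and the two coders' marginals
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    m1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    m2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: abs(i - j) / (k - 1)  # linear disagreement weight
    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * m1[i] * m2[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

coder1 = [1, 2, 3, 3, 4, 5, 2, 4]
coder2 = [1, 2, 3, 4, 4, 5, 3, 4]
kappa = weighted_kappa(coder1, coder2, categories=[1, 2, 3, 4, 5])
```

Here the two hypothetical coders disagree on two items, each by one category only, so the weighted kappa stays high; off-by-several disagreements would pull it down faster than unweighted kappa would.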
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. A dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far; in the case of imperfect information, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. First, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. It is standard to formalize static games with the normal form and dynamic games with the extensive form. Second, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. The ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character.
This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states on which players base their decisions are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness. Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered.
In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
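Backward induction, the solution concept whose epistemic conditions the thesis studies, amounts to a short recursion on an extensive-form game tree: solve the last decision nodes first, then let earlier players choose best replies to the already-solved continuations. The game below is an invented two-player perfect-information example, not one from the thesis.

```python
# Backward induction on a perfect-information game tree.
# A node is either ('leaf', (u1, u2)) or ('choice', player, {action: subtree}).
# The centipede-like game below is a made-up illustration.

def backward_induction(node):
    """Return (utility profile, list of actions on the induced path)."""
    if node[0] == 'leaf':
        return node[1], []
    _, player, actions = node
    best = None
    for action, subtree in actions.items():
        utils, path = backward_induction(subtree)
        # the mover keeps the continuation maximizing their own utility
        if best is None or utils[player] > best[0][player]:
            best = (utils, [action] + path)
    return best

game = ('choice', 0, {                       # player 0 moves first
    'take': ('leaf', (2, 0)),
    'pass': ('choice', 1, {                  # then player 1
        'take': ('leaf', (1, 3)),
        'pass': ('leaf', (4, 2)),
    }),
})
utils, path = backward_induction(game)
```

Solving the subgame first, player 1 would take (3 > 2), so player 0, anticipating this, takes immediately (2 > 1): the familiar unraveling that Aumann's conditions of common knowledge of rationality are meant to ground.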