33 results for Nonrandom two-liquid model
at Université de Lausanne, Switzerland
Abstract:
OBJECTIVES: We developed a population model that describes the ocular penetration and pharmacokinetics of penciclovir in human aqueous humour and plasma after oral administration of famciclovir. METHODS: Fifty-three patients undergoing cataract surgery received a single oral dose of 500 mg of famciclovir prior to surgery. Concentrations of penciclovir in both plasma and aqueous humour were measured by HPLC with fluorescence detection. Concentrations in plasma and aqueous humour were fitted using a two-compartment model (NONMEM software). Inter-individual and intra-individual variabilities were quantified and the influence of demographics and physiopathological and environmental variables on penciclovir pharmacokinetics was explored. RESULTS: Drug concentrations were fitted using a two-compartment, open model with first-order transfer rates between plasma and aqueous humour compartments. Among tested covariates, creatinine clearance, co-intake of angiotensin-converting enzyme inhibitors and body weight significantly influenced penciclovir pharmacokinetics. Plasma clearance was 22.8 ± 9.1 L/h and clearance from the aqueous humour was 8.2 × 10⁻⁵ L/h. AUCs were 25.4 ± 10.2 and 6.6 ± 1.8 μg · h/mL in plasma and aqueous humour, respectively, yielding a penetration ratio of 0.28 ± 0.06. Simulated concentrations in the aqueous humour after administration of 500 mg of famciclovir three times daily were in the range of values required for 50% growth inhibition of non-resistant strains of the herpes zoster virus family. CONCLUSIONS: Plasma and aqueous penciclovir concentrations showed significant variability that could only be partially explained by renal function, body weight and comedication. Concentrations in the aqueous humour were much lower than in plasma, suggesting that factors in the blood-aqueous humour barrier might prevent its ocular penetration or that redistribution occurs in other ocular compartments.
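The two-compartment structure described above can be illustrated with a short simulation. The sketch below is not the published NONMEM model: only the plasma clearance (22.8 L/h) and the aqueous-humour clearance (8.2 × 10⁻⁵ L/h) are taken from the abstract, while the absorption rate constant, the volumes and the plasma-to-aqueous transfer clearance are illustrative placeholders.

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Two-compartment model with first-order oral absorption and first-order transfer
# between plasma and aqueous humour. CLp and CLaq are the reported values; ka, Vp,
# Vaq and k_pa are assumed for illustration.
dose_mg = 500.0      # single oral dose of famciclovir (treated here as fully converted penciclovir)
ka      = 1.0        # 1/h, absorption rate constant (assumed)
CLp     = 22.8       # L/h, plasma clearance (reported mean)
Vp      = 50.0       # L, plasma volume of distribution (assumed)
CLaq    = 8.2e-5     # L/h, clearance from the aqueous humour (reported)
Vaq     = 2.5e-4     # L, aqueous-humour volume, ~250 microlitres (assumed)
k_pa    = 2.0e-5     # L/h, plasma-to-aqueous transfer clearance (assumed)

def rhs(t, y):
    gut, a_p, a_aq = y                    # drug amounts (mg) in gut, plasma, aqueous humour
    c_p, c_aq = a_p / Vp, a_aq / Vaq      # concentrations (mg/L)
    return [-ka * gut,
            ka * gut - CLp * c_p - k_pa * c_p + CLaq * c_aq,
            k_pa * c_p - CLaq * c_aq]

t = np.linspace(0.0, 24.0, 2001)
sol = solve_ivp(rhs, (0.0, 24.0), [dose_mg, 0.0, 0.0], t_eval=t)
c_p, c_aq = sol.y[1] / Vp, sol.y[2] / Vaq
auc_p, auc_aq = trapezoid(c_p, t), trapezoid(c_aq, t)
print(f"AUC plasma {auc_p:.1f} mg*h/L, AUC aqueous {auc_aq:.1f} mg*h/L, ratio {auc_aq / auc_p:.2f}")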
Abstract:
Despite its importance in our everyday lives, some properties of water remain unexplained. The study of the interactions between water and organic particles occupies research groups around the world and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are important for life. To do so, I used a simple model of water to describe aqueous solutions of different particles. Recently, liquid water has been described as a structure formed by a random network of hydrogen bonds. When a hydrophobic particle is introduced into this structure at low temperature, some hydrogen bonds are broken, which is energetically unfavorable. The water molecules then arrange themselves around the particle, forming a cage that allows even stronger hydrogen bonds (between water molecules) to be recovered: the particles are then soluble in water. At higher temperatures, the thermal agitation of the molecules becomes important and breaks the hydrogen bonds. The dissolution of the particles then becomes energetically unfavorable, and the particles separate from the water, forming aggregates that minimize their surface exposed to water. At very high temperature, however, entropic effects become so strong that the particles mix again with the water molecules. Using a model based on these changes in the structure formed by hydrogen bonds, I was able to reproduce the main phenomena connected with hydrophobicity. I found a two-phase coexistence region between the lower and upper critical solution temperatures, within which the hydrophobic particles aggregate. Outside this region, the particles are dissolved in water. I showed that the hydrophobic interaction is described by a model that takes into account only the changes in the structure of liquid water in the presence of a hydrophobic particle, rather than direct interactions between the particles. Encouraged by these promising results, I studied aqueous solutions of hydrophobic particles in the presence of kosmotropic and chaotropic cosolvents, substances that stabilize or destabilize aggregates of hydrophobic particles. The presence of these substances can be included in the model by describing their effect on the structure of water. I was able to reproduce the elevated concentration of chaotropic cosolvents in the immediate vicinity of the particle, and the opposite effect in the case of kosmotropic cosolvents. This change in cosolvent concentration near hydrophobic particles is the main cause of its effect on the solubility of the hydrophobic particles. I showed that the adapted model correctly predicts the implicit effects of cosolvents on the many-body interactions between hydrophobic particles. In addition, I extended the model to describe amphiphilic particles such as lipids, and found the formation of different types of micelles depending on the distribution of hydrophobic regions on the surface of the particles. Hydrophobicity also remains a controversial subject in protein science. I defined a new hydrophobicity scale for the amino acids that form proteins, based on their water-exposed surfaces in native proteins.
This scale allows a better comparison between experiments and theoretical results. The model developed in my work thus contributes to a better understanding of aqueous solutions of hydrophobic particles. I believe that the analytical and numerical results obtained partly clarify the physical processes underlying the hydrophobic interaction.

Despite the importance of water in our daily lives, some of its properties remain unexplained. Indeed, the interactions of water with organic particles are investigated in research groups all over the world, but controversy still surrounds many aspects of their description. In my work I have tried to understand these interactions on a molecular level using both analytical and numerical methods. Recent investigations describe liquid water as a random network formed by hydrogen bonds. The insertion of a hydrophobic particle at low temperature breaks some of the hydrogen bonds, which is energetically unfavorable. The water molecules, however, rearrange in a cage-like structure around the solute particle. Even stronger hydrogen bonds are formed between water molecules, and thus the solute particles are soluble. At higher temperatures, this strict ordering is disrupted by thermal movements, and the solution of particles becomes unfavorable. They minimize their surface exposed to water by aggregating. At even higher temperatures, entropy effects become dominant and water and solute particles mix again. Using a model based on these changes in water structure, I have reproduced the essential phenomena connected to hydrophobicity. These include an upper and a lower critical solution temperature, which define the temperature and density ranges in which aggregation occurs. Outside of this region the solute particles are soluble in water. Because I was able to demonstrate that the simple mixture model implicitly contains many-body interactions between the solute molecules, I feel that the study contributes to an important advance in the qualitative understanding of the hydrophobic effect. I have also studied the aggregation of hydrophobic particles in aqueous solutions in the presence of cosolvents. Here I have demonstrated that the important features of the destabilizing effect of chaotropic cosolvents on hydrophobic aggregates may be described within the same two-state model, adapted to account for the ability of such substances to alter the structure of water. The relevant phenomena include a significant enhancement of the solubility of non-polar solute particles and preferential binding of chaotropic substances to solute molecules. In a similar fashion, I have analyzed the stabilizing effect of kosmotropic cosolvents in these solutions. Including the ability of kosmotropic substances to enhance the structure of liquid water leads to reduced solubility, a larger aggregation regime and the preferential exclusion of the cosolvent from the hydration shell of hydrophobic solute particles. I have further adapted the MLG model to include the solvation of amphiphilic solute particles in water by allowing different distributions of hydrophobic regions at the molecular surface. I have found aggregation of the amphiphiles and the formation of various types of micelles as a function of the hydrophobicity pattern, and I have demonstrated that certain features of micelle formation may be reproduced by the adapted model describing alterations of water structure near different surface regions of the dissolved amphiphiles.
Hydrophobicity remains a controversial quantity in protein science as well. Based on the surface exposure of the 20 amino acids in native proteins, I have defined a new hydrophobicity scale, which may improve the comparison of experimental data with the results of theoretical HP models. Overall, I have shown that the primary features of the hydrophobic interaction in aqueous solutions may be captured within a model which focuses on alterations in water structure around non-polar solute particles. The results obtained within this model may illuminate the processes underlying the hydrophobic interaction.

Life on our planet began in water and could not exist without it: the cells of animals and plants contain up to 95% water. Despite its importance in our everyday lives, some properties of water remain unexplained. In particular, the study of the interactions between water and organic particles occupies research groups around the world and is far from finished. In my work I have tried to understand, at the molecular level, these interactions that are important for life. To do so, I used a simple model of water to describe aqueous solutions of different particles. Although water is generally a good solvent, a large group of molecules, called hydrophobic molecules (from the Greek "hydro" = "water" and "phobia" = "fear"), are not easily soluble in water. These hydrophobic particles try to avoid contact with water and therefore form aggregates to minimize their surface exposed to water. This force between the particles is called the hydrophobic interaction, and the physical mechanisms that lead to it are currently not well understood. In my study I described the effect of hydrophobic particles on liquid water. The objective was to clarify the mechanism of the hydrophobic interaction, which is fundamental for the formation of membranes and the functioning of biological processes in our body. Recently, liquid water has been described as a random network formed by hydrogen bonds. When a hydrophobic particle is introduced into this structure, some hydrogen bonds are broken, while the water molecules arrange themselves around the particle, forming a cage that allows even stronger hydrogen bonds (between water molecules) to be recovered: the particles are then soluble in water. At higher temperatures, the thermal agitation of the molecules becomes important and breaks the cage structure around the hydrophobic particles. The dissolution of the particles then becomes unfavorable, and the particles separate from the water, forming two phases. At very high temperature, the thermal motions in the system become so strong that the particles mix again with the water molecules. With a model that describes the system in terms of restructuring in liquid water, I succeeded in reproducing the physical phenomena connected with hydrophobicity. I showed that the hydrophobic interactions between several particles can be expressed in a model that takes into account only the hydrogen bonds between water molecules. Encouraged by these promising results, I included in my model substances frequently used to stabilize or destabilize aqueous solutions of hydrophobic particles.
I succeeded in reproducing the effects due to the presence of these substances. In addition, I was able to describe the formation of micelles by amphiphilic particles such as lipids, whose surface is partly hydrophobic and partly hydrophilic ("hydro-phile" = "water-loving"), as well as the hydrophobicity-driven folding of proteins, which guarantees the correct functioning of the biological processes in our body. In future studies I will pursue the investigation of aqueous solutions of different particles using the techniques acquired during my thesis work, trying to understand the physical properties of the liquid most important for our life: water.
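The two-state picture of bulk versus hydration-shell water sketched in this abstract can be made concrete with a toy calculation. The snippet below is a generic MLG-type partition-function sketch with invented energies and degeneracies, not the parameter set of the thesis: shell water is enthalpically favored but entropically penalized, so the solvation penalty per shell molecule is negative at low temperature and positive above a crossover, consistent with the aggregation region described above. The re-entrant mixing at very high temperature arises, in the full model, from the translational mixing entropy of the solutes, which this per-molecule sketch omits.

import numpy as np

def free_energy(T, E_ord, g_ord, E_dis, g_dis):
    # Free energy per molecule, F = -T ln Z, of a two-state unit with an ordered
    # (intact hydrogen bonds) and a disordered state; k_B = 1.
    Z = g_ord * np.exp(-E_ord / T) + g_dis * np.exp(-E_dis / T)
    return -T * np.log(Z)

T = np.linspace(0.05, 3.0, 300)   # temperature in reduced units

# Bulk water: ordered state at energy -2.0 (degeneracy 1), disordered at -1.0 (degeneracy 10).
F_bulk = free_energy(T, -2.0, 1.0, -1.0, 10.0)

# Hydration-shell water: the cage makes the ordered state stronger (-2.4) but the
# disordered shell states are fewer and weaker (-0.8, degeneracy 4). All values invented.
F_shell = free_energy(T, -2.4, 1.0, -0.8, 4.0)

dF = F_shell - F_bulk   # solvation penalty per shell water molecule
# dF < 0 at low T (cage formation pays off, the solute dissolves) and dF > 0 above a
# crossover temperature (hydrophobic aggregation becomes favorable).
for Ti, dFi in zip(T[::60], dF[::60]):
    print(f"T = {Ti:4.2f}   F_shell - F_bulk = {dFi:+.3f}")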
Abstract:
Over the last decade, there has been a significant increase in the number of high-magnetic-field MRI magnets. However, the exact effect of a high magnetic field strength (B0 ) on diffusion-weighted MR signals is not yet fully understood. The goal of this study was to investigate the influence of different high magnetic field strengths (9.4 T and 14.1 T) and diffusion times (9, 11, 13, 15, 17 and 24 ms) on the diffusion-weighted signal in rat brain white matter. At a short diffusion time (9 ms), fractional anisotropy values were found to be lower at 14.1 T than at 9.4 T, but this difference disappeared at longer diffusion times. A simple two-pool model was used to explain these findings. The model describes the white matter as a first hindered compartment (often associated with the extra-axonal space), characterized by a faster orthogonal diffusion and a lower fractional anisotropy, and a second restricted compartment (often associated with the intra-axonal space), characterized by a slower orthogonal diffusion (i.e. orthogonal to the axon direction) and a higher fractional anisotropy. Apparent T2 relaxation time measurements of the hindered and restricted pools were performed. The shortening of the pseudo-T2 value from the restricted compartment with B0 is likely to be more pronounced than the apparent T2 changes in the hindered compartment. This study suggests that the observed differences in diffusion tensor imaging parameters between the two magnetic field strengths at short diffusion time may be related to differences in the apparent T2 values between the pools. Copyright © 2013 John Wiley & Sons, Ltd.
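The proposed explanation, a T2-weighted mixture of a hindered and a restricted pool, can be written down directly. The following sketch uses invented pool fractions, diffusivities and apparent T2 values rather than the study's estimates, to show how a shorter restricted-pool T2 at higher B0 reduces that pool's effective signal fraction at the echo time, and with it the measured anisotropy.

import numpy as np

def dw_signal(b, TE, f_r, T2_r, D_r, T2_h, D_h):
    # T2-weighted sum of a restricted (r) and a hindered (h) pool, diffusion measured
    # perpendicular to the axons; returns the signal and the effective restricted fraction.
    w_r = f_r * np.exp(-TE / T2_r)
    w_h = (1.0 - f_r) * np.exp(-TE / T2_h)
    S = w_r * np.exp(-b * D_r) + w_h * np.exp(-b * D_h)
    return S, w_r / (w_r + w_h)

b, TE = 1.0, 30.0          # b-value in ms/um^2 (~1000 s/mm^2) and echo time in ms (assumed)
f_r = 0.5                  # true restricted volume fraction (assumed)
D_r, D_h = 0.1, 0.8        # perpendicular diffusivities in um^2/ms (assumed)
T2_h = 55.0                # apparent T2 of the hindered pool in ms (assumed)

for label, T2_r in [("lower field ", 45.0), ("higher field", 30.0)]:   # restricted-pool apparent T2 (assumed)
    S, f_eff = dw_signal(b, TE, f_r, T2_r, D_r, T2_h, D_h)
    print(f"{label}: signal {S:.3f}, effective restricted fraction {f_eff:.2f}")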
Abstract:
The assessment of exposure to occupational hazards is an important step in the analysis of a workplace. Direct measurements are rarely carried out at the workplace itself, and exposure is often estimated on the basis of expert judgement. There is therefore a strong need to develop simple and transparent tools that can help occupational hygiene specialists in their decisions about exposure levels. The objective of this research is to develop and improve modelling tools intended to predict exposure. As a first step, a survey was carried out in Switzerland among occupational hygienists in order to identify their needs (types of results, models and potential observable parameters). It was found that exposure models are hardly used in practice in Switzerland, exposure being estimated mainly on the basis of the expert's experience. Moreover, pollutant emission and dispersion around the source were considered fundamental parameters. To test the flexibility and accuracy of classical exposure models, modelling experiments were carried out in concrete situations. In particular, predictive models were used to assess occupational exposure to carbon monoxide and to compare it with the exposure levels reported in the literature for similar situations. Likewise, exposure to waterproofing sprays was assessed in the context of an epidemiological study of a Swiss cohort. In this case, experiments were undertaken to characterize the emission rate of waterproofing sprays. A classical two-zone model was then used to assess aerosol dispersion in the near and far field during spraying. Further experiments were carried out to gain a better understanding of the emission and dispersion processes of a tracer, focusing on the characterization of near-field exposure. An experimental design was developed to perform simultaneous measurements at several points of an exposure chamber with direct-reading instruments. It was found that, from a statistical point of view, the compartmental theory makes sense, although the assignment to a given compartment could not be made on the basis of simple geometric considerations. In a next step, experimental data were collected on the basis of observations made in about 100 different workplaces: information on the observed determinants was associated with the exposure measurements. These data were used to improve the two-zone exposure model. A tool was thus developed to include specific determinants in the choice of the compartment, thereby strengthening the reliability of the predictions. All these investigations served to improve our understanding of modelling tools and of their limitations. The integration of determinants better adapted to the needs of the experts should encourage them to use this tool in their practice.
Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but may also improve exposure assessment based on expert judgement and, consequently, the protection of workers' health.

Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated based on expert judgements. There is therefore a major requirement for simple, transparent tools to help occupational health specialists to define exposure levels. The aim of the present research is to develop and improve modelling tools in order to predict exposure levels. In a first step, a survey was made among professionals to define their expectations of modelling tools (what types of results, models and potential observable parameters). It was found that models are rarely used in Switzerland and that exposures are mainly estimated from the past experience of the expert. Moreover, chemical emissions and their dispersion near the source were also considered key parameters. Experimental and modelling studies were also performed in some specific cases in order to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to CO in different situations and compared with the exposure levels found in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study on a Swiss cohort. In this case, laboratory investigations were undertaken to characterize the waterproofing overspray emission rate. A classical two-zone model was used to assess the aerosol dispersion in the near and far field during spraying. Experiments were also carried out to better understand the processes of emission and dispersion of tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements at several points with direct-reading instruments. It was mainly found that, from a statistical point of view, the compartmental theory makes sense, but the attribution to a given compartment could not be done by simple geometric considerations. In a further step the experimental data were completed by observations made in about 100 different workplaces, including exposure measurements and observation of predefined determinants. The various data obtained were used to improve an existing two-compartment exposure model. A tool was developed to include specific determinants in the choice of the compartment, thus largely improving the reliability of the predictions. All these investigations helped to improve our understanding of modelling tools and to identify their limitations. The integration of more accessible determinants, in accordance with experts' needs, may indeed enhance model application in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but might also improve the conditions in which expert judgements take place, and therefore the protection of workers' health.
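For reference, the classical well-mixed two-zone model used for the spraying scenario reduces, at steady state, to two simple expressions: the far-field concentration is the emission rate diluted by the room ventilation, and the near-field concentration adds a local term governed by the near-field/far-field air exchange rate. The numbers in the sketch below are illustrative, not those measured for the waterproofing sprays.

def two_zone_steady_state(G, Q, beta):
    # Steady-state concentrations of the well-mixed two-zone model:
    #   far field : C_ff = G / Q          (emission diluted by room ventilation)
    #   near field: C_nf = C_ff + G/beta  (adds the local accumulation term)
    # G in mg/min, Q and beta in m3/min, concentrations in mg/m3.
    C_ff = G / Q
    C_nf = C_ff + G / beta
    return C_nf, C_ff

G, Q, beta = 50.0, 20.0, 5.0   # assumed: 50 mg/min source, 20 m3/min ventilation, 5 m3/min exchange
C_nf, C_ff = two_zone_steady_state(G, Q, beta)
print(f"near field {C_nf:.1f} mg/m3, far field {C_ff:.1f} mg/m3")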
Abstract:
Objectives: Gentamicin is among the most commonly prescribed antibiotics in newborns, but large interindividual variability in exposure levels exists. Based on a population pharmacokinetic analysis of a cohort of unselected neonates, we aimed to validate current dosing recommendations from a recent reference guideline (Neofax®). Methods: From 3039 concentrations collected in 994 preterm (median gestational age 32.3 weeks, range 24.2-36.5) and 455 term newborns, treated at the University Hospital of Lausanne between 2006 and 2011, a population pharmacokinetic analysis was performed with NONMEM®. Model-based simulations were used to assess the ability of dosing regimens to bring concentrations into the target ranges: trough ≤ 1 mg/L and peak ~8 mg/L. Results: A two-compartment model best characterized gentamicin pharmacokinetics. Model parameters are presented in the table. Body weight, gestational age and postnatal age positively influence clearance, which decreases under dopamine administration. Body weight and gestational age influence the volume of distribution. Model-based simulations confirm that preterm infants need doses higher than 4 mg/kg, with dosing intervals extended up to 48 hours for very preterm newborns, whereas most term newborns would achieve adequate exposure with 4 mg/kg q24h. More than 90% of neonates would achieve trough concentrations below 2 mg/L and peaks above 6 mg/L following the most recent guidelines. Conclusions: Simulated gentamicin exposure shows good agreement with recent dosing recommendations in terms of target concentration achievement.
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can either be estimated through repeated measurements of the pollutant concentration in air, expert judgment or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte-Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model, and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure, exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
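A minimal sketch of the level-1 inference step may help fix ideas. It is not the published program: the prior grid below simply stands in for the weighted predictive distributions produced by the two level-2 models, the measurements are invented, and the decision rule shown is a simple exceedance criterion on the 95th percentile.

import numpy as np
from scipy.stats import norm

# Grid over mu = ln(GM) and sigma = ln(GSD) of the worker's lognormal exposure profile.
mu = np.linspace(np.log(0.1), np.log(50.0), 200)
sigma = np.linspace(np.log(1.05), np.log(4.0), 120)
MU, SIGMA = np.meshgrid(mu, sigma, indexing="ij")

# Prior over (mu, sigma): a placeholder for the weighted predictive distributions
# produced by the physical and neural-network level-2 models.
prior = norm.pdf(MU, loc=np.log(2.0), scale=0.8) * norm.pdf(SIGMA, loc=np.log(2.0), scale=0.3)
prior /= prior.sum()

def update(posterior, x):
    # Multiply in the lognormal likelihood of one measured concentration x.
    like = norm.pdf(np.log(x), loc=MU, scale=SIGMA)
    post = posterior * like
    return post / post.sum()

posterior = prior
for x in [1.8, 3.5, 2.2]:   # successive personal measurements (invented)
    posterior = update(posterior, x)

# Decision step: probability that the 95th percentile of the exposure distribution
# exceeds the occupational exposure limit (same units as the measurements).
oel = 10.0
p95 = np.exp(MU + 1.645 * SIGMA)
print("P(95th percentile > OEL) =", float(posterior[p95 > oel].sum()))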
Abstract:
Objectives: Gentamicin is one of the most commonly prescribed antibiotics for suspected or proven infection in newborns. Because of age-associated (pre- and postnatal) changes in body composition and organ function, large interindividual variability in gentamicin drug levels exists, thus requiring close monitoring of this drug due to its narrow therapeutic index. We aimed to investigate clinical and demographic factors influencing gentamicin pharmacokinetics (PK) in a large cohort of unselected newborns and to explore optimal regimens based on simulation. Methods: All gentamicin concentration data from newborns treated at the University Hospital Center of Lausanne between December 2006 and October 2011 were retrieved. Gentamicin concentrations were measured within the frame of a routine therapeutic drug monitoring program, in which two concentrations (at 1 h and 12 h) are systematically collected after the first administered dose, and a few additional concentrations are sampled along the treatment course. A population PK analysis was performed by comparing various structural models, and the effect of clinical and demographic factors on gentamicin disposition was explored using NONMEM®. Results: A total of 3039 concentrations collected in 994 preterm (median gestational age 32.3 weeks, range 24.2-36.5 weeks) and 455 term newborns were used in the analysis. Most of the data (86%) were sampled after the first dose (C1 h and C12 h). A two-compartment model best characterized gentamicin PK. Average clearance (CL) was 0.044 L/h/kg (CV 25%), central volume of distribution (Vc) 0.442 L/kg (CV 18%), intercompartmental clearance (Q) 0.040 L/h/kg and peripheral volume of distribution (Vp) 0.122 L/kg. Body weight, gestational age and postnatal age positively influenced CL. The use of both gestational age and postnatal age better predicted CL than postmenstrual age alone. CL was affected by dopamine and furosemide administration and non-significantly by indometacin. Body weight, gestational age and dopamine co-administration significantly influenced Vc. Model-based simulations confirm that preterm infants need higher doses, above 4 mg/kg, and extended dosing intervals to achieve adequate concentrations. Conclusions: This study, performed on a very large cohort of neonates, identified important factors influencing gentamicin PK. The model will serve to elaborate a Bayesian tool for dosage individualization based on a single measurement.
Abstract:
AIM: This study aims to investigate the clinical and demographic factors influencing gentamicin pharmacokinetics in a large cohort of unselected premature and term newborns and to evaluate optimal regimens in this population. METHODS: All gentamicin concentration data, along with clinical and demographic characteristics, were retrieved from medical charts in a Neonatal Intensive Care Unit over 5 years within the frame of a routine therapeutic drug monitoring programme. Data were described using non-linear mixed-effects regression analysis (NONMEM®). RESULTS: A total of 3039 gentamicin concentrations collected in 994 preterm and 455 term newborns were included in the analysis. A two-compartment model best characterized gentamicin disposition. The average parameter estimates, for a median body weight of 2170 g, were clearance (CL) 0.089 l h⁻¹ (CV 28%), central volume of distribution (Vc) 0.908 l (CV 18%), intercompartmental clearance (Q) 0.157 l h⁻¹ and peripheral volume of distribution (Vp) 0.560 l. Body weight, gestational age and post-natal age positively influenced CL. Dopamine co-administration had a significant negative effect on CL, whereas the influence of indomethacin and furosemide was not significant. Both body weight and gestational age significantly influenced Vc. Model-based simulations confirmed that, compared with term neonates, preterm infants need higher doses, above 4 mg kg⁻¹, at extended intervals to achieve adequate concentrations. CONCLUSIONS: This observational study conducted in a large cohort of newborns confirms the importance of body weight and gestational age for dosage adjustment. The model will serve to set up dosing recommendations and elaborate a Bayesian tool for dosage individualization based on concentration monitoring.
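The reported typical parameters can be checked against the stated targets with a few lines of linear pharmacokinetics. The sketch below uses the published population estimates for the median 2170 g newborn, but the regimen (4.5 mg/kg every 36 h) and the bolus-input simplification are assumptions made for illustration only.

import numpy as np
from scipy.linalg import expm

CL, Vc, Q, Vp = 0.089, 0.908, 0.157, 0.560         # typical values reported above (L and L/h)
bw_kg, dose_per_kg, tau_h = 2.170, 4.5, 36.0       # assumed regimen for a median newborn
dose = dose_per_kg * bw_kg                          # mg

# Linear two-compartment disposition, amounts = [central, peripheral].
M = np.array([[-(CL + Q) / Vc,  Q / Vp],
              [        Q / Vc, -Q / Vp]])

A = np.zeros(2)
for _ in range(10):                                 # repeat doses to approach steady state
    A[0] += dose                                    # bolus into the central compartment
    C_peak = (expm(M * 1.0) @ A)[0] / Vc            # concentration 1 h after the dose
    A = expm(M * tau_h) @ A                         # decay over one dosing interval
    C_trough = A[0] / Vc

print(f"steady state: peak (1 h) = {C_peak:.1f} mg/L, trough = {C_trough:.2f} mg/L")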
Abstract:
Valganciclovir (VGC) is an oral prodrug of ganciclovir (GCV) recently introduced for prophylaxis and treatment of cytomegalovirus infection. Optimal concentration exposure for effective and safe VGC therapy would require either reproducible VGC absorption and GCV disposition or dosage adjustment based on therapeutic drug monitoring (TDM). We examined GCV population pharmacokinetics in solid organ transplant recipients receiving oral VGC, including the influence of clinical factors, the magnitude of variability, and its impact on efficacy and tolerability. Nonlinear mixed effect model (NONMEM) analysis was performed on plasma samples from 65 transplant recipients under VGC prophylaxis or treatment. A two-compartment model with first-order absorption appropriately described the data. Systemic clearance was markedly influenced by the glomerular filtration rate (GFR), patient gender, and graft type (clearance/GFR = 1.7 in kidney, 0.9 in heart, and 1.2 in lung and liver recipients) with interpatient and interoccasion variabilities of 26 and 12%, respectively. Body weight and sex influenced central volume of distribution (V1 = 0.34 liter/kg in males and 0.27 liter/kg in females [20% interpatient variability]). No significant drug interaction was detected. The good prophylactic efficacy and tolerability of VGC precluded the demonstration of any relationship with GCV concentrations. In conclusion, this analysis highlights the importance of thorough adjustment of VGC dosage to renal function and body weight. Considering the good predictability and reproducibility of the GCV profile after treatment with oral VGC, routine TDM does not appear to be clinically indicated in solid-organ transplant recipients. However, GCV plasma measurement may still be helpful in specific clinical situations.
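The covariate structure reported above can be summarized in a small helper that returns the typical ganciclovir clearance and central volume for a given patient; the variability terms (26% interpatient and 12% interoccasion on clearance, 20% interpatient on volume) are quoted but not simulated in this sketch, and the unit conversion in the example is an assumption.

CL_TO_GFR = {"kidney": 1.7, "heart": 0.9, "lung": 1.2, "liver": 1.2}   # reported CL/GFR ratios
V1_PER_KG = {"male": 0.34, "female": 0.27}                             # reported central volume, L/kg

def typical_parameters(gfr, graft, sex, weight_kg):
    # Typical ganciclovir clearance (same units as gfr) and central volume (L).
    return CL_TO_GFR[graft] * gfr, V1_PER_KG[sex] * weight_kg

# Example: 70 kg male kidney recipient with a GFR of 60 mL/min, i.e. 3.6 L/h (assumed conversion).
print(typical_parameters(3.6, "kidney", "male", 70.0))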
Abstract:
Inorganic phosphate (Pi) is one of the main nutrients limiting plant growth and development in many agro-ecosystems. In plants, phosphate is acquired from the soil by the roots and is then transferred to the shoot via the xylem. In the model plant Arabidopsis thaliana, PHO1 was previously identified as being involved in loading Pi into the xylem of roots. AtPHO1 belongs to a multigenic family composed of 10 additional members, namely AtPHO1;H1 to AtPHO1;H10. In this study, we aimed at further investigating the role of the PHO1 gene family in Pi homeostasis in plants, and to this end we isolated and characterized the PHO1 members of two main model plants, the moss Physcomitrella patens and the rice Oryza sativa.

In the bryophyte P. patens, bioinformatic analyses revealed the presence of seven AtPHO1 homologues, highly similar to AtPHO1. The seven moss PHO1 genes, namely PpPHO1;1 to PpPHO1;7, appeared to be differentially regulated, both at the tissue level and in response to Pi status. However, only PpPHO1;1 and PpPHO1;7 were specifically up-regulated upon Pi starvation, suggesting a potential role in Pi homeostasis. We also characterized the response of P. patens to Pi starvation, showing that higher and lower plants share some common strategies to adapt to Pi deficiency.

In the second part, focusing on the monocotyledon rice, we showed the existence of three PHO1 homologues, OsPHO1;1 to OsPHO1;3, with the unique particularity that each has natural antisense transcripts (NATs). Molecular analyses revealed that both the sense and the antisense OsPHO1;2 transcripts were by far the most abundantly expressed transcripts of the family, preferentially expressed in the roots. The stable expression of OsPHO1;2 in all conditions tested, in contrast with the strong induction of the antisense transcript upon Pi starvation, suggests a putative role for the antisense transcript in regulating the sense transcript. Moreover, mutant analyses revealed that OsPHO1;2 plays a key role in Pi homeostasis, in transferring Pi from the root to the shoot. Finally, complementing the Arabidopsis pho1 mutant, characterized by low shoot Pi and reduced growth, with the rice OsPHO1;2 gene revealed a new role for PHO1 in Pi signaling. Indeed, the complemented plants showed normal growth, yet with a low Pi content.
Abstract:
The T-cell receptor (TCR) interaction with antigenic peptides (p) presented by the major histocompatibility complex (MHC) molecule is a key determinant of immune response. In addition, TCR-pMHC interactions offer examples of features more generally pertaining to protein-protein recognition: subtle specificity and cross-reactivity. Despite their importance, molecular details determining the TCR-pMHC binding remain unsolved. However, molecular simulation provides the opportunity to investigate some of these aspects. In this study, we perform extensive equilibrium and steered molecular dynamics simulations to study the unbinding of three TCR-pMHC complexes. As a function of the dissociation reaction coordinate, we are able to obtain converged H-bond counts and energy decompositions at different levels of detail, ranging from the full proteins, to separate residues and water molecules, down to single atoms at the interface. Many observed features do not support a previously proposed two-step model for TCR recognition. Our results also provide keys to interpret experimental point-mutation results. We highlight the role of water both in terms of interface resolvation and of water molecules trapped in the bound complex. Importantly, we illustrate how two TCRs with similar reactivity and structures can have essentially different binding strategies. Proteins 2011; © 2011 Wiley-Liss, Inc.
Abstract:
Under stressful conditions, mutant or post-translationally modified proteins may spontaneously misfold and form toxic species, which may further assemble into a continuum of increasingly large and insoluble toxic oligomers that may in turn condense into less toxic, compact amyloids in the cell. Intracellular accumulation of aggregated proteins is a common denominator of several neurodegenerative diseases. To cope with the cytotoxicity induced by abnormal, aggregated proteins, cells have evolved various defence mechanisms, among which are the Hsp70 molecular chaperones. Hsp70 (DnaK in E. coli) is an ATPase chaperone involved in many physiological processes in the cell, such as assisting de novo protein folding, dissociating native protein oligomers and serving as a pulling motor in the import of polypeptides into organelles. In addition, Hsp70 chaperones can actively solubilize and reactivate stable protein aggregates, such as heat- or mutation-induced aggregates. Hsp70 requires the cooperation of two other co-chaperones, Hsp40 and a nucleotide exchange factor (NEF), to fulfil its unfolding activity. In the first experimental section of this thesis (Chapter II), we studied by biochemical analysis the in vitro interaction between recombinant human aggregated α-synuclein (α-Syn oligomers), mimicking the toxic α-Syn oligomeric species found in Parkinson's disease brains, and a model Hsp70/Hsp40 chaperone system (the E. coli DnaK/DnaJ/GrpE system). We found that the chaperone-mediated unfolding of two denatured model enzymes was strongly affected by α-Syn oligomers but, remarkably, not by monomers. This in vitro dysfunction of the Hsp70 chaperone system resulted from the sequestration of the Hsp40 proteins by the oligomeric α-synuclein species. In the second experimental part (Chapter III), we performed an in vitro biochemical analysis of the co-chaperone function of three E. coli Hsp40 proteins (DnaJ, CbpA and DjlA) in the ATP-fuelled, DnaK-mediated refolding of a model DnaK substrate into its native state. Hsp40 activities were compared using dose-response approaches in two types of in vitro assays: refolding of heat-denatured G6PDH and DnaK-mediated ATPase activity. We also observed that the disaggregation efficiency of Hsp70 does not directly correlate with Hsp40 binding affinity. In addition, we found that these E. coli Hsp40s confer substrate specificity to DnaK, CbpA being more effective than DnaJ in the DnaK-mediated disaggregation of large G6PDH aggregates under certain conditions.

Sensitized by various stresses or mutations, certain functional proteins of the cell can spontaneously convert into inactive, misfolded forms that are enriched in beta sheets and expose hydrophobic surfaces favouring aggregation. Seeking to stabilize themselves, these hydrophobic surfaces can associate with the hydrophobic regions of other misfolded proteins, forming stable protein aggregates: amyloids. The intracellular deposition of aggregated proteins is a common denominator of numerous neurodegenerative diseases. To counter the cytotoxicity induced by aggregated proteins, cells have developed several defence mechanisms, among which are the Hsp70 molecular chaperones. Hsp70 requires the collaboration of two other co-chaperones, Hsp40 and NEF, to accomplish its disaggregation activity. Hsp70 (DnaK in E. coli) is also involved in other physiological functions, such as assisting newly synthesized proteins as they exit the ribosome, or the transmembrane transport of polypeptides. Moreover, Hsp70 chaperones can also solubilize and reactivate proteins that have aggregated following stress or mutation. In the first experimental part of this thesis (Chapter II), we studied in vitro the interaction between α-synuclein oligomers, responsible among other things for Parkinson's disease, and the Hsp70/Hsp40 chaperone system (the Escherichia coli DnaK/DnaJ/GrpE system). We showed that, unlike monomers, α-synuclein oligomers inhibited the chaperone system during the refolding of aggregated proteins. This dysfunction of the chaperone system results from the sequestration of the Hsp40 chaperones by the α-synuclein oligomers. The second experimental part (Chapter III) is devoted to an in vitro study of the co-chaperone function of three E. coli Hsp40s (DnaJ, CbpA and DjlA) during the DnaK-mediated disaggregation of a pre-aggregated protein. Their activities were compared through a dose-response approach in two enzymatic assays: refolding of the aggregated protein and the ATPase activity of DnaK. Furthermore, we showed that the disaggregation efficiency of Hsp70 and the affinity of the Hsp40 chaperones for their substrate were not positively correlated. We also showed that these three Hsp40 chaperones are directly involved in the specificity of the functions carried out by the Hsp70 chaperones: in the presence of CbpA, DnaK disaggregates large protein aggregates markedly more efficiently than in the presence of DnaJ.
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and it remains an issue even when data are available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers to select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on a single site or on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security all influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions of disadvantaged children.
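The two-stage approach attributed to Ray (1991), with DEA efficiency scores computed in a first stage and an econometric model of those scores in a second stage, can be sketched as follows. The data, the single input and output, and the ordinary least-squares second stage are illustrative choices, not the specification used in the thesis.

import numpy as np
from scipy.optimize import linprog

def dea_scores(X, Y):
    # Input-oriented, constant-returns-to-scale DEA: for each school o, minimise theta
    # subject to X @ lam <= theta * x_o and Y @ lam >= y_o, lam >= 0.
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # decision variables: [theta, lam_1..lam_n]
        A_in = np.c_[-X[:, [o]], X]                    # X @ lam - theta * x_o <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y]            # -(Y @ lam) <= -y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

rng = np.random.default_rng(0)
n = 30
multi_site = rng.integers(0, 2, n)                               # environmental dummy (invented)
X = np.vstack([rng.uniform(50.0, 100.0, n)])                     # one input, e.g. spending per pupil
Y = np.vstack([X[0] * (0.8 - 0.1 * multi_site) + rng.normal(0.0, 3.0, n)])   # one output, e.g. test score

theta = dea_scores(X, Y)
# Second stage: regress efficiency on the environmental variable (a tobit model is also common).
beta, *_ = np.linalg.lstsq(np.c_[np.ones(n), multi_site], theta, rcond=None)
print(f"mean efficiency {theta.mean():.2f}; estimated multi-site effect {beta[1]:+.3f}")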