943 results for Dose-Response Relationship, Immunologic
Abstract:
The aim of the present work was to study the deltamethrin susceptibility of eggs from Triatoma infestans populations and the contribution of pyrethroid esterases to deltamethrin degradation. Insects were collected from sylvatic areas, including Veinte de Octubre and Kirus-Mayu (Bolivia), and from domiciliary areas, including El Palmar (Bolivia) and La Pista (Argentina). Deltamethrin susceptibility was determined by dose-response bioassays. Serial dilutions of deltamethrin (0.0005-1 mg/mL) were topically applied to 12-day-old eggs. Samples from El Palmar had the highest lethal dose ratio (LDR) value (44.90) compared to the susceptible reference strain (NFS), whereas the Veinte de Octubre samples had the lowest value (0.50). Pyrethroid esterases were evaluated using 7-coumaryl permethrate (7-CP) on individually homogenised eggs from each population and from NFS. The El Palmar and La Pista samples contained 40.11 and 36.64 pmol/min/mg protein, respectively; these values were statistically similar to NFS (34.92 pmol/min/mg protein) and different from Kirus-Mayu and Veinte de Octubre (27.49 and 22.69 pmol/min/mg protein, respectively). The toxicological data indicate that the domestic populations were resistant to deltamethrin, but no statistically significant contribution of 7-CP esterases was observed. The sylvatic populations had LDR values similar to NFS but lower 7-CP esterase activities. Moreover, this is the first study of pyrethroid esterases in T. infestans eggs employing a specific substrate (7-CP).
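As a purely illustrative sketch of the kind of dose-response analysis described above (the mortality data below are invented, not the study's bioassay results), a log-logistic mortality curve can be fitted to topical-application data for a field population and for the susceptible reference strain, with the lethal dose ratio (LDR) taken as the quotient of the two LD50 estimates:

```python
# Illustrative only: invented mortality data, not the study's bioassay results.
import numpy as np
from scipy.optimize import curve_fit

def mortality(logdose, log_ld50, slope):
    """Two-parameter log-logistic dose-mortality curve on a log10 dose scale."""
    return 1.0 / (1.0 + 10 ** (slope * (log_ld50 - logdose)))

logdose = np.log10([0.0005, 0.005, 0.05, 0.5, 1.0])        # mg/mL, topical application
dead_field = np.array([0.00, 0.04, 0.28, 0.80, 0.89])      # hypothetical field population
dead_ref   = np.array([0.09, 0.50, 0.91, 0.99, 1.00])      # hypothetical reference strain

(field_ld50, _), _ = curve_fit(mortality, logdose, dead_field, p0=[-1.0, 1.0])
(ref_ld50, _), _   = curve_fit(mortality, logdose, dead_ref,   p0=[-2.0, 1.0])

ldr = 10 ** (field_ld50 - ref_ld50)   # lethal dose ratio; >1 suggests reduced susceptibility
print(f"LD50 field = {10**field_ld50:.4f} mg/mL, LD50 ref = {10**ref_ld50:.4f} mg/mL, LDR = {ldr:.1f}")
```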
Abstract:
Background: Current guidelines underline the limitations of existing instruments to assess fitness to drive and the poor adaptability of batteries of neuropsychological tests to primary care settings. Aims: To provide a free, reliable, transparent, computer-based instrument capable of detecting effects of age or drugs on visual processing and cognitive functions. Methods: Relying on systematic reviews of neuropsychological tests and driving performance, we conceived four new computerized tasks measuring visual processing (Task 1), movement attention shift (Task 2), executive response, alerting and orientation gain (Task 3), and spatial memory (Task 4). We then planned five studies to test MedDrive's reliability and validity. Study 1 defined instructions and learning functions, collecting data from 105 senior drivers attending an automobile club course. Study 2 assessed concurrent validity for detecting minor cognitive impairment (MCI) against the useful field of view (UFOV) test in 120 new senior drivers. Study 3 collected data from 200 healthy drivers aged 20-90 years to model age-related normal cognitive decline. Study 4 measured MedDrive's reliability by having 21 healthy volunteers repeat the tests five times. Study 5 tested MedDrive's responsiveness to alcohol in a randomised, double-blind, placebo-controlled, crossover, dose-response validation trial including 20 young healthy volunteers. Results: Instructions were well understood and accepted by all senior drivers. Measures of visual processing (Task 1) performed better than the UFOV in detecting MCI (ROC 0.770 vs. 0.620; p = 0.048). MedDrive was capable of explaining 43.4% of changes occurring with natural cognitive decline. In young healthy drivers, learning effects became negligible from the third session onwards for all tasks except dual tasking (ICC = 0.769). All measures except alerting and orientation gain were affected by blood alcohol concentrations. Finally, MedDrive was able to explain 29.3% of potential causes of swerving on the driving simulator. Discussion and conclusions: MedDrive shows improved performance compared with existing computerized neuropsychological tasks and promising results for both clinical and research purposes.
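The Task 1 vs. UFOV comparison above rests on discriminative ability summarized as areas under ROC curves. A minimal sketch with simulated scores (the labels, scores and effect sizes below are assumptions, not the study data):

```python
# Simulated scores and labels; effect sizes are assumptions, not the study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
mci = rng.integers(0, 2, size=120)                    # 0 = no MCI, 1 = MCI
task1_score = mci * 1.0 + rng.normal(0, 1.2, 120)     # stronger separation
ufov_score  = mci * 0.4 + rng.normal(0, 1.2, 120)     # weaker separation

print("Task 1 ROC AUC:", round(roc_auc_score(mci, task1_score), 3))
print("UFOV   ROC AUC:", round(roc_auc_score(mci, ufov_score), 3))
```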
Abstract:
ABSTRACT: BACKGROUND: EMD 521873 (Selectikine or NHS-IL2LT) is a fusion protein consisting of modified human IL-2 which binds specifically to the high-affinity IL-2 receptor, and an antibody specific for both single- and double-stranded DNA, designed to facilitate the enrichment of IL-2 in tumor tissue. METHODS: An extensive analysis of pharmacodynamic (PD) markers associated with target modulation was assessed during a first-in-human phase I dose-escalation trial of Selectikine. RESULTS: Thirty-nine patients with metastatic or locally advanced tumors refractory to standard treatments were treated with increasing doses of Selectikine, and nine further patients received additional cyclophosphamide. PD analysis, assessed during the first two treatment cycles, revealed strong activation of both CD4+ and CD8+ T-cells and only weak NK cell activation. No dose response was observed. As expected, Treg cells responded actively to Selectikine but remained at lower frequency than effector CD4+ T-cells. Interestingly, patient survival correlated positively with both high lymphocyte counts and low levels of activated CD8+ T-cells at baseline, the latter of which was associated with enhanced T-cell responses to the treatment. CONCLUSIONS: The results confirm the selectivity of Selectikine with predominant T-cell and low NK cell activation, supporting follow-up studies assessing the clinical efficacy of Selectikine for cancer patients.
Abstract:
Neuropeptide Y (NPY) is a potent inhibitor of neurotransmitter release through the Y2 receptor subtype. Specific antagonists for the Y2 receptor have not yet been described. Based on the concept of template-assembled synthetic proteins, we used a cyclic template molecule containing two beta-turn mimetics for covalent attachment of four COOH-terminal fragments RQRYNH2 (NPY 33-36), termed T4-[NPY(33-36)]4. This structurally defined template-assembled synthetic protein was tested for binding using the SK-N-MC and LN319 cell lines, which express the Y1 and Y2 receptor, respectively. T4-[NPY(33-36)]4 binds to the Y2 receptor with high affinity (IC50 = 67.2 nM) and binds only poorly to the Y1 receptor. Tested on LN319 cells at concentrations up to 10 microM, this peptidomimetic shows no inhibitory effect on forskolin-stimulated cAMP levels (IC50 for NPY = 2.5 nM). Furthermore, we used confocal microscopy to examine the NPY-induced increase in intracellular calcium in single LN319 cells. Preincubation of the cells with T4-[NPY(33-36)]4 shifted the dose-response curves for NPY-induced intracellular calcium mobilization to the right, at concentrations ranging from 0.1 nM to 10 microM. Finally, we assessed the competitive antagonistic properties of T4-[NPY(33-36)]4 at presynaptic peptidergic Y2 receptors modulating noradrenaline release. The compound T4-[NPY(33-36)]4 caused a marked rightward shift of the concentration-response curve of NPY 13-36, a Y2-selective fragment, yielding a pA2 value of 8.48. Thus, to the best of our knowledge, T4-[NPY(33-36)]4 represents the first potent and selective Y2 antagonist.
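The pA2 reported above is derived from the shift of the agonist concentration-response curve in the presence of the antagonist. A single-concentration Schild-type calculation, with hypothetical numbers chosen only to illustrate the arithmetic (the EC50s and antagonist concentration below are assumptions, not the paper's raw data), looks like this:

```python
# Hypothetical EC50s and antagonist concentration, chosen only to illustrate the arithmetic.
import math

ec50_control = 2.5e-9       # agonist EC50 without antagonist (M)
ec50_shifted = 7.5e-8       # agonist EC50 with antagonist present (M)
antagonist_conc = 1.0e-7    # fixed antagonist concentration (M)

dose_ratio = ec50_shifted / ec50_control                         # DR = EC50'/EC50
pa2 = math.log10(dose_ratio - 1) - math.log10(antagonist_conc)   # Schild: pA2 = log(DR-1) - log[B]
print(f"dose ratio = {dose_ratio:.0f}, estimated pA2 = {pa2:.2f}")
```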
Abstract:
Glucagon-like peptide-1 (GLP-1) stimulates glucose-induced insulin secretion by binding to a specific G protein-coupled receptor linked to activation of the adenylyl cyclase pathway. Here, using insulinoma cell lines, we studied homologous and heterologous desensitization of GLP-1-induced cAMP production. Preexposure of the cells to GLP-1 induced a decrease in GLP-1-mediated cAMP production, as assessed by a 3- to 5-fold rightward shift of the dose-response curve and an approximately 20 percent decrease in the maximal production of cAMP. Activation of protein kinase C by the phorbol ester phorbol 12-myristate 13-acetate (PMA) also induced desensitization of the GLP-1-mediated response, leading to a 6- to 9-fold shift in the EC50 and a 30% decrease in the maximal production of cAMP. Both forms of desensitization were additive, and the protein kinase C inhibitor RO-318220 inhibited PMA-induced desensitization, but not agonist-induced desensitization. GLP-1- and PMA-dependent desensitization correlated with receptor phosphorylation, and the levels of phosphorylation induced by the two agents were additive. Furthermore, PMA-induced, but not GLP-1-induced, phosphorylation was totally inhibited by RO-318220. Internalization of the GLP-1 receptor did not participate in the desensitization induced by PMA, as a mutant GLP-1 receptor lacking the last 20 amino acids of the cytoplasmic tail was found to be totally resistant to the internalization process, but was still desensitized after PMA preexposure. PMA and GLP-1 were not able to induce the phosphorylation of a receptor deletion mutant lacking the last 33 amino acids of the cytoplasmic tail, indicating that the phosphorylation sites were located within the deleted region. The cAMP production mediated by this deletion mutant was not desensitized by PMA and was only poorly desensitized by GLP-1. Together, our results indicate that the production of cAMP and, hence, the stimulation of insulin secretion induced by GLP-1 can be negatively modulated by homologous and heterologous desensitization, mechanisms that involve receptor phosphorylation.
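Desensitization above is quantified as a rightward shift of the EC50 plus a drop in maximal cAMP response. A sketch of how such a shift can be estimated by fitting Hill-type curves to control and pre-exposed data (values simulated to mimic a roughly 4-fold shift and 20% Emax loss; not the published measurements):

```python
# Simulated cAMP responses mimicking a ~4-fold EC50 shift and ~20% Emax loss; not the published data.
import numpy as np
from scipy.optimize import curve_fit

def hill(logc, emax, logec50, n):
    """Sigmoid concentration-response on a log10 concentration scale."""
    return emax / (1.0 + 10 ** (n * (logec50 - logc)))

logc = np.linspace(-12, -7, 9)                         # log10 [GLP-1] (M)
rng = np.random.default_rng(1)
control = hill(logc, 100.0, -10.0, 1.0) + rng.normal(0, 2, logc.size)
primed  = hill(logc,  80.0,  -9.4, 1.0) + rng.normal(0, 2, logc.size)  # pre-exposed cells

p_ctl, _ = curve_fit(hill, logc, control, p0=[100, -10, 1])
p_pre, _ = curve_fit(hill, logc, primed,  p0=[100, -10, 1])

fold_shift = 10 ** (p_pre[1] - p_ctl[1])
print(f"EC50 shift ~ {fold_shift:.1f}-fold; Emax {p_ctl[0]:.0f}% -> {p_pre[0]:.0f}% of control")
```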
Abstract:
When facing age-related cerebral decline, older adults are unequally affected by cognitive impairment, for reasons that remain unclear. To explore the underlying mechanisms and find possible solutions to maintain life-space mobility, there is a need for a standardized behavioral test that relates to behaviors in natural environments. The aim of the project described in this paper was therefore to provide a free, reliable, transparent, computer-based instrument capable of detecting age-related changes in visual processing and cortical functions, for the purposes of research into human behavior in computational transportation science. After obtaining content validity and exploring the psychometric properties of the developed tasks, we derived the scoring method for measuring cerebral decline from 106 older drivers aged ≥70 years attending a driving refresher course organized by the Swiss Automobile Association, and tested the instrument's validity against their on-road driving performance (Study 1). We then validated the derived method on a new sample of 182 drivers (Study 2), measured the instrument's reliability by having 17 healthy, young volunteers repeat all tests included in the instrument five times (Study 3), and explored the instrument's underlying psychophysical functions in 47 older drivers (Study 4). Finally, we tested the instrument's responsiveness to alcohol and its effects on driving simulator performance in a randomized, double-blind, placebo-controlled, crossover, dose-response validation trial including 20 healthy, young volunteers (Study 5). The developed instrument revealed good psychometric properties related to processing speed. It was reliable (ICC = 0.853), showed a reasonable association with driving performance (R² = 0.053), and responded to blood alcohol concentrations of 0.5 g/L (p = 0.008). Our results suggest that MedDrive is capable of detecting age-related changes that affect processing speed. These changes nevertheless do not necessarily affect driving behavior.
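Test-retest reliability of the kind summarized above (ICC = 0.853) is commonly computed as a two-way consistency intraclass correlation across repeated sessions. A minimal sketch with simulated scores (the subject count and noise levels are assumptions, not the study data):

```python
# Simulated repeated-session scores; subject count and noise level are assumptions.
import numpy as np

def icc_3_1(scores):
    """Two-way mixed, consistency ICC(3,1) for a (subjects x sessions) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between-session
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(2)
ability = rng.normal(0.0, 1.0, size=(17, 1))              # stable per-subject level
sessions = ability + rng.normal(0.0, 0.4, size=(17, 5))   # five repeated sessions
print(f"ICC(3,1) = {icc_3_1(sessions):.3f}")
```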
Abstract:
Background: Imatinib has revolutionized the treatment of chronic myeloid leukemia (CML) and gastrointestinal stromal tumors (GIST). Considering the large inter-individual differences in the function of the systems involved in its disposition, exposure to imatinib can be expected to vary widely among patients. This observational study aimed to describe imatinib pharmacokinetic variability and its relationship with various biological covariates, especially plasma alpha1-acid glycoprotein (AGP), and to explore the concentration-response relationship in patients. Methods: A population pharmacokinetic model (NONMEM) including 321 plasma samples from 59 patients was built and used to derive individual post-hoc Bayesian estimates of drug exposure (AUC, area under the curve). Associations between AUC and therapeutic response or tolerability were explored by ordered logistic regression. The influence of the target genotype (i.e. KIT mutation profile) on response was also assessed in GIST patients. Results: A one-compartment model with first-order absorption described the data appropriately, with an average oral clearance (CL) of 14.3 L/h and a volume of distribution (Vd) of 347 L. A large inter-individual variability remained unexplained in both CL (36%) and Vd (63%), but AGP levels proved to have a marked impact on total imatinib disposition. Moreover, both total and free AUC correlated with the occurrence and number of side effects (e.g. OR 2.9±0.6 for a 2-fold increase in free AUC; p<0.001). Furthermore, in GIST patients, a higher free AUC predicted a higher probability of therapeutic response (OR 1.9±0.5; p<0.05), notably in patients whose tumors harbored an exon 9 mutation or wild-type KIT, genotypes known to decrease tumor sensitivity to imatinib. Conclusion: The large pharmacokinetic variability, together with the pharmacokinetic-pharmacodynamic relationships uncovered, argues for further investigating the usefulness of individualizing imatinib prescription based on therapeutic drug monitoring (TDM). For this type of drug, TDM should ideally take into consideration either circulating AGP concentrations or free drug levels, as well as KIT genotype for GIST.
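For orientation, a one-compartment model with first-order absorption, evaluated at the reported typical oral clearance (14.3 L/h) and volume of distribution (347 L), gives the shape of the concentration-time curve and the exposure metric AUC = F·Dose/CL. The dose, absorption rate constant ka and bioavailability F below are assumptions for illustration, not fitted values from the study:

```python
# Typical-value illustration; dose, ka and F are assumptions, not fitted parameters.
import numpy as np

def conc_oral_1cpt(t, dose, cl, v, ka, f=1.0):
    """Plasma concentration (mg/L) after a single oral dose, one-compartment model."""
    ke = cl / v
    return (f * dose * ka) / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 7)                                 # hours post-dose
c = conc_oral_1cpt(t, dose=400, cl=14.3, v=347, ka=0.6)   # 400 mg dose, ka assumed
auc_inf = 1.0 * 400 / 14.3                                # AUC(0-inf) = F*Dose/CL
print(np.round(c, 3), "mg/L;  AUC(0-inf) ~", round(auc_inf, 1), "mg*h/L")
```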
Abstract:
We introduce a label-free technology based on digital holographic microscopy (DHM) with applicability to screening by imaging, and we demonstrate its capability for cytotoxicity assessment using living mammalian cells. For this first application compatible with high-content screening, we automated a digital holographic microscope for image acquisition of cells using commercially available 96-well plates. Data generated through both label-free DHM imaging and fluorescence-based methods were in good agreement for cell viability identification, and a Z'-factor close to 0.9 was determined, validating the robustness of the DHM assay for phenotypic screening. Further, an excellent correlation was obtained between experimental cytotoxicity dose-response curves and known IC values for different toxic compounds. For comparable results, DHM has the major advantages of being label-free and close to an order of magnitude faster than automated standard fluorescence microscopy.
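The Z'-factor quoted above is a standard screening-assay quality metric, Z' = 1 - 3(σ_pos + σ_neg)/|μ_pos - μ_neg|, with values approaching 1 indicating a wide separation band between positive and negative controls. A small sketch with made-up control-well statistics (not the published plate data):

```python
# Made-up control-well statistics; illustrates the metric, not the published plate data.
import numpy as np

def z_prime(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

rng = np.random.default_rng(3)
untreated = rng.normal(100.0, 3.0, 48)   # negative controls: viable wells (% viability)
toxic_ref = rng.normal(5.0, 3.0, 48)     # positive controls: reference toxicant
print(f"Z' = {z_prime(toxic_ref, untreated):.2f}")
```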
Abstract:
Calbindin D-28k is a calcium-binding protein that is not expressed by dorsal root ganglion cells cultured from 6-day-old (E6) chick embryos. When soluble muscle extracts from E11 or E18 embryos or from chickens 2 weeks after hatching were added immediately after seeding, dorsal root ganglion cells grown at E6 displayed, with time, neuronal subpopulations expressing calbindin immunoreactivity; the effect of muscle extract on the percentage of calbindin-immunoreactive dorsal root ganglion cells followed a dose-response curve. When muscle extract was added to cultures after a 3-day delay, the percentage of calbindin-expressing neurons was unchanged. The effect produced by muscle extract and, to a lesser degree, skin extract on the appearance of calbindin-positive neurons was not reproduced by brain or liver extracts, although all four exerted a trophic action on cultured neurons. Hence it is assumed that muscle extract contains a factor that exerts an inductive effect on the initiation of calbindin expression by uncommitted subpopulations of sensory neurons, rather than a trophic influence on the selective survival of covertly committed neuronal subpopulations. The fact that muscle extract promoted calbindin expression by dorsal root ganglion cells in neuron-enriched as well as in mixed dorsal root ganglion cell cultures indicates that the factor acts directly on sensory neurons rather than indirectly through the mediation of non-neuronal cells. Since the active muscular factor was non-dialysable, heat-inactivated, trypsin-sensitive and retained by molecular filters with a cut-off of 30 kDa, this factor is probably a protein.
Abstract:
Background: It has been hypothesized that children and adolescents might be more vulnerable to possible health effects from mobile phone exposure than adults. We investigated whether mobile phone use is associated with brain tumor risk among children and adolescents. Methods: CEFALO is a multicenter case-control study conducted in Denmark, Sweden, Norway, and Switzerland that includes all children and adolescents aged 7-19 years who were diagnosed with a brain tumor between 2004 and 2008. We conducted in-person interviews with 352 case patients (participation rate: 83%) and 646 control subjects (participation rate: 71%) and their parents. Control subjects were randomly selected from population registries and matched by age, sex, and geographical region. We asked about mobile phone use and included mobile phone operator records when available. Odds ratios (ORs) for brain tumor risk and 95% confidence intervals (CIs) were calculated using conditional logistic regression models. Results: Regular users of mobile phones were not statistically significantly more likely to have been diagnosed with brain tumors than nonusers (OR = 1.36; 95% CI = 0.92 to 2.02). Children who had started to use mobile phones at least 5 years earlier were not at increased risk compared with those who had never regularly used mobile phones (OR = 1.26; 95% CI = 0.70 to 2.28). In the subset of study participants for whom operator-recorded data were available, brain tumor risk was related to the time elapsed since the mobile phone subscription was started, but not to the amount of use. No increased risk of brain tumors was observed for the brain areas receiving the highest amount of exposure. Conclusion: The absence of an exposure-response relationship, either in terms of the amount of mobile phone use or by localization of the brain tumor, argues against a causal association.
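The ORs above come from conditional logistic regression on matched sets. As a simplified, unmatched illustration only (the exposure split below is invented, although the case and control totals match the reported 352 and 646 participants), an unadjusted odds ratio with a Wald 95% CI can be computed from a 2x2 table:

```python
# Invented exposure split (only the totals match the reported 352 cases / 646 controls);
# the study itself used conditional logistic regression on matched sets.
import math

cases    = {"exposed": 190, "unexposed": 162}
controls = {"exposed": 310, "unexposed": 336}

odds_ratio = (cases["exposed"] * controls["unexposed"]) / (cases["unexposed"] * controls["exposed"])
se_log_or = math.sqrt(sum(1.0 / n for n in (*cases.values(), *controls.values())))
lo, hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se_log_or) for s in (-1, 1))
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```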
Abstract:
Under stressful conditions, mutant or post-translationally modified proteins may spontaneously misfold and form toxic species, which may further assemble into a continuum of increasingly large and insoluble toxic oligomers that may further condense into less toxic, compact amyloids in the cell. Intracellular accumulation of aggregated proteins is a common denominator of several neurodegenerative diseases. To cope with the cytotoxicity induced by abnormal, aggregated proteins, cells have evolved various defence mechanisms, among which are the Hsp70 molecular chaperones. Hsp70 (DnaK in E. coli) is an ATPase chaperone involved in many physiological processes in the cell, such as assisting de novo protein folding, dissociating native protein oligomers and serving as a pulling motor in the import of polypeptides into organelles. In addition, Hsp70 chaperones can actively solubilize and reactivate stable protein aggregates, such as heat- or mutation-induced aggregates. Hsp70 requires the cooperation of two other co-chaperones, Hsp40 and a nucleotide exchange factor (NEF), to fulfil its unfolding activity. In the first experimental section of this thesis (Chapter II), we used biochemical analysis to study the in vitro interaction between recombinant human aggregated α-synuclein (α-Syn oligomers), mimicking the toxic α-Syn oligomeric species found in Parkinson's disease (PD) brains, and a model Hsp70/Hsp40 chaperone system (the E. coli DnaK/DnaJ/GrpE system). We found that the chaperone-mediated unfolding of two denatured model enzymes was strongly affected by α-Syn oligomers but, remarkably, not by monomers. This in vitro dysfunction of the Hsp70 chaperone system resulted from the sequestration of the Hsp40 proteins by the oligomeric α-synuclein species. In the second experimental part (Chapter III), we performed an in vitro biochemical analysis of the co-chaperone function of three E. coli Hsp40 proteins (DnaJ, CbpA and DjlA) in the ATP-fuelled, DnaK-mediated refolding of a model DnaK chaperone substrate into its native state. Hsp40 activities were compared using dose-response approaches in two types of in vitro assays: refolding of heat-denatured G6PDH and DnaK-mediated ATPase activity. We also observed that the disaggregation efficiency of Hsp70 does not directly correlate with Hsp40 binding affinity. In addition, we found that these E. coli Hsp40s confer substrate specificity on DnaK, with CbpA being more effective than DnaJ in the DnaK-mediated disaggregation of large G6PDH aggregates under certain conditions.

Sensitized by various stresses or mutations, some functional proteins of the cell can spontaneously convert into inactive, misfolded forms, enriched in beta sheets and exposing hydrophobic surfaces that favour aggregation. Seeking to stabilize themselves, these hydrophobic surfaces can associate with the hydrophobic regions of other misfolded proteins, forming stable protein aggregates: amyloids. The intracellular deposition of aggregated proteins is a common denominator of many neurodegenerative diseases. To counter the cytotoxicity induced by aggregated proteins, cells have developed several defence mechanisms, among which are the Hsp70 molecular chaperones. Hsp70 requires the collaboration of two other co-chaperones, Hsp40 and a NEF, to accomplish its disaggregation activity. Hsp70 (DnaK in E. coli) is also involved in other physiological functions, such as assisting newly synthesized proteins as they exit the ribosome and the transmembrane transport of polypeptides. Moreover, Hsp70 chaperones can also solubilize and reactivate proteins aggregated following stress or mutation. In the first experimental part of this thesis (Chapter II), we studied in vitro the interaction between α-synuclein oligomers, implicated in Parkinson's disease among other conditions, and the Hsp70/Hsp40 chaperone system (the Escherichia coli DnaK/DnaJ/GrpE system). We showed that, unlike the monomers, α-synuclein oligomers inhibited the chaperone system during the refolding of aggregated proteins. This dysfunction of the chaperone system results from the sequestration of the Hsp40 chaperones by the α-synuclein oligomers. The second experimental part (Chapter III) is devoted to an in vitro study of the co-chaperone function of three E. coli Hsp40s (DnaJ, CbpA and DjlA) in the DnaK-mediated disaggregation of a pre-aggregated protein. Their activities were compared through a dose-response approach in two enzymatic assays: refolding of the aggregated protein and the ATPase activity of DnaK. We further showed that the disaggregation efficiency of Hsp70 and the substrate-binding affinity of the Hsp40 chaperones were not positively correlated. We also showed that these three Hsp40 chaperones are directly involved in the specificity of the functions carried out by Hsp70 chaperones: in the presence of CbpA, DnaK disaggregates large protein aggregates markedly more efficiently than in the presence of DnaJ.
Abstract:
Peats are an important reserve of humified carbon in terrestrial ecosystems. Interest in the use of humic substances as plant growth promoters is continuously increasing. The objective of this study was to evaluate the bioactivity of alkali-soluble humic substances (HS) and of humic acids (HA) and fulvic acids (FA) isolated from peats at different stages of organic matter decomposition (sapric, fibric and hemic) in the Serra do Espinhaço Meridional, state of Minas Gerais. Dose-response curves were established for the number of lateral roots growing from the main axis of tomato seedlings. The bioactivity of HA was greatest (highest lateral-root response at the lowest concentration), whereas FA did not intensify root growth. Both HS and HA stimulated root hair formation. At low concentrations, HS and HA induced root hair formation near the root cap, a typical effect of hormonal imbalance in plants. Transgenic tomato carrying the DR5::GUS reporter gene allowed us to observe that an auxin-related signalling pathway was involved in the root growth promotion by HA.
Abstract:
In recent years, numerous studies have highlighted the toxic effects of organic micropollutants on the species of our lakes and rivers. Most of these studies, however, have focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are not negligible. This doctoral thesis therefore addressed models for predicting the environmental risk that such cocktails pose to the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and also to take a critical look at the methodologies used in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the Rhône and for Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, allowing a general evaluation of the risk of mixtures; they highlight the most problematic substances, that is, those contributing most to the toxicity of the mixture, which in our case were essentially four pesticides. The study also shows that all substances, even at trace levels, contribute to the effect of the mixture. This observation has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. The proposed approach nevertheless carries an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the safety factors employed. The second part of the study addressed the use of mixture models in environmental risk calculations. Mixture models were developed and validated species by species, not for an assessment of the whole ecosystem, so their use should in principle proceed through a species-by-species calculation, which is rarely done owing to the lack of available ecotoxicological data. The aim was therefore to compare, with randomly generated values, the risk calculated according to a rigorous species-by-species method with the risk calculated in the classical way, in which the models are applied to the whole community without accounting for inter-species variation. The results are similar in most cases, which validates the traditionally used approach; this work nevertheless identified certain cases in which the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that cocktails of micropollutants may have had on communities in situ, using a two-step approach. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the period studied, from 2004 to 2009, this herbicide toxicity decreased, from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community. To this end, statistical analysis made it possible to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time; interestingly, some of them had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species and that they can be used to explain the role of these substances in ecosystem functioning, although these models have limits and underlying assumptions that are important to consider when applying them.

For several years now, scientists as well as society at large have been concerned about the risk that organic micropollutants may pose to the aquatic environment. Indeed, several studies have shown the toxic effects these substances may induce on organisms living in our lakes and rivers, especially when they are exposed to acute or chronic concentrations. However, most studies have focused on the toxicity of single compounds, i.e. compounds considered individually, and the same currently applies to the environmental risk assessment procedures in European regulations. Yet aquatic organisms are typically exposed every day, and simultaneously, to thousands of organic compounds, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) together with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-level assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value. This exceedance is generally due to two or three main substances, so the proposed procedures allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures to predict mixture effects on communities in the aquatic system was investigated.
These established methodologies combine the models of concentration addition (CA) or response addition (RA) with species sensitivity distribution (SSD) curves. Indeed, mixture effect predictions have been shown to be consistent only when the mixture models are applied to a single species, and not to several species aggregated simultaneously into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: first apply the CA or RA models to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets in order to characterize the robustness of the traditional approach of applying the models to species sensitivity distributions. The results showed that using CA directly on SSDs might lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data for substances, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would, if anything, slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep this source of error in mind, especially when the SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a large European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, the gradient of mixture toxicity of 14 herbicides regularly detected in the lake was calculated using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species revealed to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time showed similar behaviors in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
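The two mixture models discussed throughout this summary have simple generic forms: concentration addition sums toxic units c_i/EC50_i, while response addition combines independently acting substances as E_mix = 1 - Π(1 - E_i). A schematic implementation of these textbook formulas with hypothetical inputs (not the thesis code or data):

```python
# Generic textbook formulas with hypothetical inputs; not the thesis code or data.
import numpy as np

def ca_toxic_units(concentrations, ec50s):
    """Concentration addition: sum of toxic units c_i / EC50_i."""
    return float(np.sum(np.asarray(concentrations) / np.asarray(ec50s)))

def ra_mixture_effect(effects):
    """Response addition: E_mix = 1 - prod(1 - E_i) for independently acting substances."""
    return 1.0 - float(np.prod(1.0 - np.asarray(effects)))

conc = [0.05, 0.12, 0.02]     # measured concentrations (ug/L), hypothetical
ec50 = [2.0, 5.0, 0.8]        # single-substance EC50s (ug/L), hypothetical
print("CA toxic units:", round(ca_toxic_units(conc, ec50), 3))
print("RA mixture effect:", round(ra_mixture_effect([0.02, 0.05, 0.01]), 3))
```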
Abstract:
Therapeutic drug monitoring (TDM) can be defined as the measurement of drug concentrations in biological samples to individualise treatment by adapting the drug dose to improve efficacy and/or reduce toxicity. Cytotoxic drugs are characterised by steep dose-response relationships and narrow therapeutic windows, and inter-individual pharmacokinetic (PK) variability is often substantial. There are, however, multiple reasons why TDM has never been fully implemented in daily oncology practice: difficulties in establishing appropriate concentration targets, the common use of combination chemotherapies, and the paucity of published data from pharmacological trials. The situation is different with targeted therapies. Their large inter-individual PK variability is influenced by the pharmacogenetic background of the patient (e.g. cytochrome P450 and ABC transporter polymorphisms), patient characteristics such as adherence to treatment, and environmental factors (drug-drug interactions). Retrospective studies have shown that exposure to targeted drugs correlates with treatment response in various cancers. Evidence currently exists for imatinib and is emerging for other compounds, including nilotinib, dasatinib, erlotinib, sunitinib, sorafenib and mammalian target of rapamycin (mTOR) inhibitors. Applications of TDM during oral targeted therapy may best be reserved for particular situations, including lack of therapeutic response, severe or unexpected toxicities, anticipated drug-drug interactions and concerns over adherence to treatment. There are still few data supporting TDM approaches with monoclonal antibodies (mAbs), even though encouraging results have been reported with rituximab and cetuximab; TDM of mAbs is not yet supported by scientific evidence. Considerable effort should be made for targeted therapies to better define concentration-effect relationships and to perform comparative randomised trials of classic dosing versus pharmacokinetically guided adaptive dosing.
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and the implementation of "nano" legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to risk assessment of ENMs, encompassing the key parameters to characterise ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn. Due to the high batch-to-batch variability in the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Alongside the OECD priority list of ENMs, other criteria for the selection of ENMs could be helpful, such as relevance for mechanistic (scientific) or risk-assessment-based studies, widespread availability (and thus high expected volumes of use), or consumer concern (route of consumer exposure depending on application). The OECD priority list focuses on the validity of OECD tests; therefore source materials will be first in scope for testing. For risk assessment, however, it is much more relevant to have toxicity data on the material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine one single characteristic, and some of them can be rather expensive; practically, it is currently not feasible to fully characterise ENMs. Many techniques available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques be employed to determine a given metric of ENMs. The first great challenge is to prioritise the metrics that are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to describe ENMs fully. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonization be initiated and that protocols be exchanged. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with the current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.