Abstract:
Purpose: Studies on large animal models are an important step in testing new therapeutic strategies before human application. Considering the importance of cone function for human vision and the paucity of large animal models for cone dystrophies with an enriched cone region, we set out to develop a pig model of cone degeneration. Using lentiviral-directed transgenesis, we obtained pigs transgenic for a cone-dominant mutant gene described in a human cone dystrophy. Methods: A lentiviral vector encoding the human double-mutant GUCY2DE837D/R838S cDNA under the control of a region of the pig arrestin-3 (Arr3) promoter was produced and used for lentiviral-derived transgenesis in pigs. PCR genotyping and Southern blotting determined the genotype of pigs born after injection of the vector at the zygote stage. Retinal function was analysed by ERG and behavioural tests at 11, 24 and 54 weeks of age. OCT and histological analyses were performed to describe retinal morphology. Results: The proportion of transgenic pigs born after lentiviral-directed transgenesis was close to 50%. Transgenic pigs with 3 to 5 transgene copies per cell clearly presented a reduced photopic response from 3 months of age onwards. Except for one pig carrying 6 integrated transgene copies, no dramatic decrease in general mobility was observed even at 6 months of age. OCT examination revealed no major changes in the ONL structure of the 6-month-old pigs. Retinal morphology was well conserved in the 2 pigs sacrificed (3 and 6 months old), except for a noticeable displacement of some cone nuclei into the outer segment layer. Conclusions: Lentiviral-directed transgenesis is a rapid and straightforward method to engineer transgenic pigs. Some Arr3-GUCY2DE837D/R838S pigs show signs of retinal dysfunction, but further work is needed to describe the progression of the disease in this model.
Abstract:
BACKGROUND & AIMS: Age is frequently discussed as negative host factor to achieve a sustained virological response (SVR) to antiviral therapy of chronic hepatitis C. However, elderly patients often show advanced fibrosis/cirrhosis as known negative predictive factor. The aim of this study was to assess age as an independent predictive factor during antiviral therapy. METHODS: Overall, 516 hepatitis C patients were treated with pegylated interferon-α and ribavirin, thereof 66 patients ≥60 years. We analysed the impact of host factors (age, gender, fibrosis, haemoglobin, previous hepatitis C treatment) and viral factors (genotype, viral load) on SVR per therapy course by performing a generalized estimating equations (GEE) regression modelling, a matched pair analysis and a classification tree analysis. RESULTS: Overall, SVR per therapy course was 42.9 and 26.1%, respectively, in young and elderly patients with hepatitis C virus (HCV) genotypes 1/4/6. The corresponding figures for HCV genotypes 2/3 were 74.4 and 84%. In the GEE model, age had no significant influence on achieving SVR. In matched pair analysis, SVR was not different in young and elderly patients (54.2 and 55.9% respectively; P = 0.795 in binominal test). In classification tree analysis, age was not a relevant splitting variable. CONCLUSIONS: Age is not a significant predictive factor for achieving SVR, when relevant confounders are taken into account. As life expectancy in Western Europe at age 60 is more than 20 years, it is reasonable to treat chronic hepatitis C in selected elderly patients with relevant fibrosis or cirrhosis but without major concomitant diseases, as SVR improves survival and reduces carcinogenesis.
Abstract:
In recent years, numerous studies have demonstrated the toxic effects of organic micropollutants on the species of our lakes and rivers. However, most of these studies focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are far from negligible. This doctoral thesis therefore examined models for predicting the environmental risk of such cocktails for the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and to take a critical look at the methodologies used, in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the Rhône and for Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, allowing a general assessment of mixture risk. Such an approach highlights the most problematic substances, that is, those contributing most to the toxicity of the mixture; in our case, essentially 4 pesticides. The study also shows that all substances, even at trace levels, contribute to the effect of the mixture. This observation has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. The proposed approach nevertheless presents an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the assessment factors employed. The second part of the study focused on the use of mixture models in environmental risk calculation. Mixture models were developed and validated species by species, not for an assessment of the ecosystem as a whole. Their use should therefore proceed through a species-by-species calculation, which is rarely done owing to the lack of available ecotoxicological data. The aim was thus to compare, using randomly generated values, the risk calculated with a rigorous species-by-species method against the classical calculation in which the models are applied to the whole community without accounting for inter-species variation. The results are similar in the majority of cases, which validates the traditionally used approach. This work nevertheless identified certain cases where the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that micropollutant cocktails may have had on communities in situ. A two-step approach was adopted: first, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the period studied, 2004 to 2009, this herbicide toxicity decreased from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community. To this end, statistical analysis made it possible to separate out other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time. Interestingly, some of them had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species, and that they can be used to explain the role of these substances in the functioning of ecosystems. These models nevertheless have limits and underlying assumptions that it is important to consider when applying them.
- For several years now, scientists as well as society at large have been concerned about the risks that organic micropollutants may pose to aquatic life. Indeed, many studies have shown the toxic effects these substances can induce in organisms living in our lakes and rivers, especially when exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of single compounds, i.e. considered individually. The same holds in the current European regulations concerning environmental risk assessment procedures for these substances. Yet aquatic organisms are typically exposed every day to thousands of organic compounds simultaneously, and the toxic effects of these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the mixture model of concentration addition and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-level assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value, an exceedance generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk. This is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting mixture effects on communities in aquatic systems was investigated.
These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution (SSD) curves. Mixture effect predictions, however, were shown to be consistent only when the mixture models are applied to a single species, not to several species aggregated simultaneously into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: first apply the CA or RA models to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets to characterize the robustness of the traditional approach of applying the models directly to species sensitivity distributions. The results showed that using CA directly on SSDs may lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when substances show a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data for substances, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would, if anything, slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep in mind this source of error, especially when SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, the mixture toxicity of 14 herbicides regularly detected in the lake was calculated over several years, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species shown to be influenced, positively or negatively, by the decrease in toxicity in the lake over time displayed similar behaviour in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses.
One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
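The screening and mixture models referred to throughout this abstract reduce to compact formulas: a PNEC-based risk quotient for first-tier screening, toxic-unit summation for concentration addition, and multiplication of unaffected fractions for response addition. A minimal sketch of the three, assuming per-substance concentrations and toxicity values are already in hand; the numeric examples are illustrative only:

```python
import numpy as np

def risk_quotient(conc, pnec):
    """First-tier screening: sum of concentration/PNEC ratios.
    A value above 1 flags the mixture as potentially critical."""
    return float(np.sum(np.asarray(conc) / np.asarray(pnec)))

def toxic_units_ca(conc, ec50):
    """Concentration addition: toxic-unit sum for one species.
    One toxic unit corresponds to the mixture's EC50."""
    return float(np.sum(np.asarray(conc) / np.asarray(ec50)))

def response_addition(effects):
    """Response addition (independent action): combined fraction
    affected from the per-substance effects on one species."""
    effects = np.asarray(effects)
    return float(1.0 - np.prod(1.0 - effects))

# Illustrative numbers only:
print(risk_quotient([2e-3, 5e-4], [1e-3, 1e-3]))   # 2.5 -> above threshold
print(toxic_units_ca([0.1, 0.2], [1.0, 4.0]))      # 0.15 toxic units
print(response_addition([0.05, 0.10]))             # 0.145
```

The rigorous procedure proposed in the thesis would apply these per-species functions first and only then aggregate the results into an SSD, rather than applying CA or RA to the aggregated curve directly.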
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate them using simulation studies. Our comparative analysis uses methods including generalized least squares, spatial filters, wavelet-revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
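A minimal sketch of the kind of simulation comparison described: synthetic data with spatially autocorrelated errors (an exponential covariance is assumed here, not necessarily the paper's choice), fitted by ordinary least squares, which ignores the autocorrelation, and by generalized least squares, which models it. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))                  # random site locations
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
Sigma = np.exp(-d / 2.0) + 1e-8 * np.eye(n)               # exponential covariance + nugget

x = rng.normal(size=n)
y = 1.0 + 0.5 * x + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)              # ignores autocorrelation
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)    # accounts for it
print(beta_ols, beta_gls)   # repeat over many draws to compare error rates
```

Repeating this over many simulated data sets, and recording coverage and Type I error rates for each estimator, is the essence of the comparative design the abstract summarizes.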
Abstract:
The transcription factor serum response factor (SRF) plays a crucial role in the development of several organs. However, its role in the skin has not been explored. Here, we show that keratinocytes in normal human and mouse skin expressed high levels of SRF but that SRF expression was strongly downregulated in the hyperproliferative epidermis of wounded and psoriatic skin. Keratinocyte-specific deletion within the mouse SRF locus during embryonic development caused edema and skin blistering, and all animals died in utero. Postnatal loss of mouse SRF in keratinocytes resulted in the development of psoriasis-like skin lesions. These lesions were characterized by inflammation, hyperproliferation, and abnormal differentiation of keratinocytes as well as by disruption of the actin cytoskeleton. Ultrastructural analysis revealed markedly reduced cell-cell and cell-matrix contacts and loss of cell compaction in all epidermal layers. siRNA-mediated knockdown of SRF in primary human keratinocytes revealed that the cytoskeletal abnormalities and adhesion defects were a direct consequence of the loss of SRF. In contrast, the hyperproliferation observed in vivo was an indirect effect that was most likely a consequence of the inflammation. These results reveal that loss of SRF disrupts epidermal homeostasis and strongly suggest its involvement in the pathogenesis of hyperproliferative skin diseases, including psoriasis.
Abstract:
Background: Chemoreception is a widespread mechanism involved in critical biological processes, including individual and social behavior. The insect peripheral olfactory system comprises three major multigene families: the olfactory receptor (Or), the gustatory receptor (Gr), and the odorant-binding protein (OBP) families. Members of the latter family establish the first contact with the odorants and thus constitute the first step in the chemosensory transduction pathway. Results: Comparative analysis of the OBP family in 12 Drosophila genomes allowed the identification of 595 genes encoding putative functional and nonfunctional members in extant species, with 43 gene gains and 28 gene losses (15 deletions and 13 pseudogenization events). The evolution of this family shows tandem gene duplication events, progressive divergence in DNA and amino acid sequence, and a prevalence of pseudogenization events in external branches of the phylogenetic tree. We observed that the arrangement of OBPs in clusters is maintained across the Drosophila species and that purifying selection governs the evolution of the family; nevertheless, OBP genes differ in their levels of functional constraint. Finally, we detect that the OBP repertoire evolves more rapidly in the specialist lineages of the Drosophila melanogaster group (D. sechellia and D. erecta) than in their closest generalists. Conclusion: Overall, the evolution of the OBP multigene family is consistent with the birth-and-death model. We also found that members of this family exhibit different functional constraints, indicative of some functional divergence, and that they might be involved in some of the specialization processes that occurred during the diversification of the Drosophila genus.
Abstract:
Peroxisome proliferator-activated receptors (PPARs) are ligand-activated transcription factors belonging to the nuclear hormone receptor superfamily. Three cDNAs encoding such receptors have been isolated from Xenopus laevis (xPPAR alpha, beta, and gamma). Furthermore, the gene coding for xPPAR beta has been cloned, making it the first member of this subfamily whose genomic organization has been solved. Functionally, xPPAR alpha, as well as its mouse and rat homologs, is thought to play an important role in lipid metabolism owing to its ability to activate transcription of a reporter gene through the promoter of the acyl-CoA oxidase (ACO) gene. ACO catalyzes the rate-limiting step in the peroxisomal beta-oxidation of fatty acids. Activation is achieved by the binding of xPPAR alpha to a regulatory element (DR1) found in the promoter region of this gene. xPPAR beta and gamma are also able to recognize the same type of element and, like xPPAR alpha, can form heterodimers with the retinoid X receptor. All three xPPARs appear to be activated by synthetic peroxisome proliferators as well as by naturally occurring fatty acids, suggesting that a common mode of action exists for all members of this subfamily of nuclear hormone receptors.
Abstract:
In response to the mandate on Load and Resistance Factor Design (LRFD) implementation by the Federal Highway Administration (FHWA) for all new bridge projects initiated after October 1, 2007, the Iowa Highway Research Board (IHRB) sponsored these research projects to develop regional LRFD recommendations. The LRFD development was performed using the Iowa Department of Transportation (DOT) Pile Load Test database (PILOT). To increase the data points for LRFD development, develop LRFD recommendations for dynamic methods, and validate the results of LRFD calibration, 10 full-scale field tests on the most commonly used steel H-piles (e.g., HP 10 x 42) were conducted throughout Iowa. Detailed in situ soil investigations were carried out, push-in pressure cells were installed, and laboratory soil tests were performed. Pile responses during driving, at the end of driving (EOD), and at re-strikes were monitored using the Pile Driving Analyzer (PDA), followed by CAse Pile Wave Analysis Program (CAPWAP) analyses. Hammer blow counts were recorded for the Wave Equation Analysis Program (WEAP) and dynamic formulas. Static load tests (SLTs) were performed, and the pile capacities were determined based on Davisson's criterion. This extensive experimental research generated important data for analytical and computational investigations. The SLT-measured load-displacements were compared with results simulated using the TZPILE program and the modified borehole shear test method. Two analytical pile setup quantification methods, expressed in terms of soil properties, were developed and validated. A new calibration procedure was developed to incorporate pile setup into LRFD.
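The abstract does not reproduce the calibration mathematics. As a rough orientation only: regional resistance factors for pile capacity are often first approximated with the closed-form First-Order Second-Moment (FOSM) expression before more refined reliability methods are applied. A sketch of that textbook formula; the equation form follows the commonly cited FOSM approximation, and every default value below is an illustrative assumption, not a number from this study:

```python
import numpy as np

def fosm_resistance_factor(lam_R, cov_R, beta_T=2.33,
                           dead_to_live=2.0,
                           gamma_D=1.25, gamma_L=1.75,
                           lam_D=1.05, cov_D=0.10,
                           lam_L=1.15, cov_L=0.20):
    """Closed-form FOSM resistance factor. lam_R and cov_R are the bias
    (measured/predicted capacity) and coefficient of variation of the
    prediction method being calibrated; beta_T is the target reliability
    index; the remaining load statistics are placeholder values."""
    cov_Q2 = cov_D**2 + cov_L**2
    num = lam_R * (gamma_D * dead_to_live + gamma_L) * np.sqrt(
        (1 + cov_Q2) / (1 + cov_R**2))
    den = (lam_D * dead_to_live + lam_L) * np.exp(
        beta_T * np.sqrt(np.log((1 + cov_R**2) * (1 + cov_Q2))))
    return num / den

# e.g. a capacity method with bias 1.1 and COV 0.4:
print(round(fosm_resistance_factor(1.1, 0.4), 2))
```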
Abstract:
The slow-phase velocity of nystagmus is one of the most sensitive parameters of vestibular function and is currently the standard for evaluating the caloric test. However, the assessment of this parameter requires recording the response by using nystagmography. The aim of this study was to evaluate whether frequency and duration of the caloric nystagmus, as measured by using a clinical test with Frenzel glasses, could predict the result of the recorded test. The retrospective analysis of 222 caloric test results recorded by means of electronystagmography has shown a good association between the 3 parameters for unilateral weakness. The asymmetry observed in the velocity can be predicted by a combination of frequency and duration. On the other hand, no relationship was observed between the parameters for directional preponderance. These results indicate that a clinical caloric test with frequency and duration as parameters can be used to predict the unilateral weakness, which would be obtained by use of nystagmography. We propose an evaluation of the caloric test on the basis of diagrams combining the 3 response parameters.
Abstract:
A major issue in the application of waveform inversion methods to crosshole georadar data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a time-domain waveform inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little-to-no trade-off between the wavelet estimation and the tomographic imaging procedures.
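The abstract leaves the deconvolution step implicit. One common way to realise it is a least-squares spectral division of the observed traces by the currently modelled impulse responses, stabilised by a water level and iterated with forward modelling until the wavelet stops changing. A minimal sketch of a single update, with the array layout (traces × samples) and the water-level scheme as assumptions rather than the authors' exact formulation:

```python
import numpy as np

def update_wavelet(observed, greens, eps=1e-3):
    """One iteration: least-squares estimate of the source wavelet from
    observed traces and impulse responses (Green's functions) modelled
    with the current permittivity/conductivity estimates."""
    D = np.fft.rfft(observed, axis=-1)
    G = np.fft.rfft(greens, axis=-1)
    num = np.sum(np.conj(G) * D, axis=0)       # stack over trace pairs
    den = np.sum(np.abs(G) ** 2, axis=0)
    den = np.maximum(den, eps * den.max())     # water level stabilises division
    return np.fft.irfft(num / den, n=observed.shape[-1])

# Loop: invert with the current wavelet -> remodel greens -> update_wavelet
# -> repeat until the wavelet estimate converges.
```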
Abstract:
Epstein-Barr virus (EBV) has been associated with multiple sclerosis (MS); however, most studies examining the relationship between the virus and the disease have been based on serologies, and if EBV is linked to MS, CD8+ T cells are likely to be involved, as they are important both in MS pathogenesis and in controlling viruses. We hypothesized that valuable information on the link between MS and EBV would be gained from studying the frequency and activation levels of EBV-specific CD8+ T cells in different categories of MS patients and control subjects. We investigated EBV-specific cellular immune responses using proliferation and enzyme-linked immunospot assays, and humoral immune responses by analysis of anti-EBV antibodies, in a cohort of 164 subjects, including 108 patients with different stages of MS, 35 with other neurological diseases and 21 healthy control subjects. Additionally, all subjects were tested against cytomegalovirus (CMV), another neurotropic herpesvirus neither convincingly associated with MS nor thought to be deleterious to the disease. We corrected all data for age using linear regression analysis over the total cohorts of EBV- and CMV-infected subjects. In the whole cohort, the rates of EBV and CMV infection were 99% and 51%, respectively. The frequency of IFN-gamma-secreting EBV-specific CD8+ T cells in patients with clinically isolated syndrome (CIS) was significantly higher than that found in patients with relapsing-remitting MS (RR-MS), secondary-progressive MS, primary-progressive MS, patients with other neurological diseases, and healthy controls. The shorter the interval between MS onset and our assays, the more intense the EBV-specific CD8+ T-cell response. Confirming these results, we found that EBV-specific CD8+ T-cell responses decreased in 12/13 patients with CIS followed prospectively for 1.0 +/- 0.2 years. In contrast, there was no difference between categories for EBV-specific CD4+ T-cell, or for CMV-specific CD4+ and CD8+ T-cell responses. Anti-EBV-encoded nuclear antigen-1 (EBNA-1) antibodies correlated with EBV-specific CD8+ T cells in patients with CIS and RR-MS. However, whereas EBV-specific CD8+ T cells were increased the most in early MS, EBNA-1-specific antibodies were increased in early as well as in progressive forms of MS. Our data show high levels of CD8+ T-cell activation against EBV, but not CMV, early in the course of MS, supporting the hypothesis that EBV might be associated with the onset of this disease.
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive, while the DNA stores the information on how to produce the different proteins of the genome. Regulating gene transcription is thus a first important step that can affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors (TFs), which can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, characterized by uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This thesis presents new computational methodologies to study gene regulation, together with applications of these methods to the understanding of cancer-related regulatory programs. Understanding transcriptional regulation is a major challenge; we address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected in the conservation of the binding sites across mammals and in the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERa) and study the influence of hERa in regulating transcription. Upon estrogen signaling, hERa binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERa, we conduct in silico analysis of the sequences underlying the ChIP sites using the position weight matrices (PWMs) of hERa partners, the TFs FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data on hERa binding to DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples, as well as in studies of the response of cells to estrogen. We confirm that hERa binding sites are distributed anywhere on the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERa and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes.
The second group shows strong binding of hERa and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERa-mediated induction of gene expression. Our work supports the model of hERa activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERa has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important to better understand how hERa can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analysing time series of biological measurements such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models, in which we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must coherently behave as an activator, or a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method is tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
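The sign-coherence extension described above can be written as a box-constrained L1 regression: for each target gene, a TF assumed to act as an activator gets a non-negative coefficient and a repressor a non-positive one. A minimal sketch under those assumptions; the |b| term is smoothed so a quasi-Newton solver applies, and this is an illustration of the idea, not the thesis's exact solver:

```python
import numpy as np
from scipy.optimize import minimize

def sign_coherent_lasso(X, y, signs, lam=0.1, eps=1e-8):
    """L1-penalised regression of one target's expression y on TF
    activities X, with each coefficient bounded to the sign of the
    TF's assumed role: +1 activator, -1 repressor, 0 unconstrained."""
    n, p = X.shape
    bounds = [(0.0, None) if s > 0 else (None, 0.0) if s < 0 else (None, None)
              for s in signs]
    def obj(b):
        r = y - X @ b
        # smoothed |b| keeps the objective differentiable at zero
        return 0.5 * (r @ r) / n + lam * np.sum(np.sqrt(b ** 2 + eps))
    return minimize(obj, np.zeros(p), bounds=bounds, method="L-BFGS-B").x
```

Iterating over all targets with a shared sign assignment per TF enforces the coherence requirement across the whole network.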
Abstract:
Based on the case of reforms aimed at integrating the provision of income protection and employment services for jobless people in Europe, this thesis seeks to understand the reasons which may prompt governments to engage in large-scale organisational reforms. Over the last 20 years, several European countries have indeed radically redesigned the organisational structure of their welfare state by merging or bundling existing front-line offices in charge of benefit payment and employment services into 'one-stop' agencies. Whereas in academic and political debates these reforms are generally presented as a necessary and rational response to the problems and inconsistencies induced by fragmentation in a context of the reorientation of welfare states towards labour market activation, this thesis shows that the agenda setting of these reforms is in fact the result of multidimensional political dynamics. More specifically, the main argument of this thesis is that these reforms are best understood not so much from the problems induced by organisational compartmentalism, whose political recognition is often controversial, but from the various goals that governments may simultaneously achieve by means of their adoption. This argument is tested by comparing the agenda-setting processes of large-scale coordination reforms in the United Kingdom (Jobcentre Plus), Germany (Hartz IV reform) and Denmark (2005 Jobcentre reform), and contrasting them with the Swiss case, where the government has so far rejected any coordination initiative involving organisational redesign. This comparison brings to light the importance, for the rise of organisational reforms, of the possibility of coupling them with three goals: first, goals related to the strengthening of activation policies; second, institutional goals seeking to redefine the balance of responsibilities between the central state and non-state actors; and finally, electoral goals for governments eager to maintain political credibility. The decisive role of electoral goals in the three countries suggests that these reforms are less bound by partisan politics than by the particular pressures facing governments that arrived in office after long periods in opposition.
Abstract:
Introduction: ICM+ software encapsulates our 20 years' experience in brain monitoring. It collects data from a variety of bedside monitors and produces time trends of parameters defined using configurable mathematical formulae. To date it is being used in nearly 40 clinical research centres worldwide. We present its application for continuous monitoring of cerebral autoregulation using near-infrared spectroscopy (NIRS). Methods: Data from multiple bedside monitors are processed by ICM+ in real time using a large selection of signal processing methods. These include various time and frequency domain analysis functions as well as fully customisable digital filters. The final results are displayed in a variety of ways, including simple time trends as well as time-window-based histograms, cross histograms, correlations, and so forth. All this allows complex information from bedside monitors to be summarized in a concise fashion and presented to medical and nursing staff in a simple way that alerts them to the development of various pathological processes. Results: One hundred and fifty patients monitored continuously with NIRS, arterial blood pressure (ABP) and, where available, intracranial pressure (ICP) were included in this study. There were 40 severely head-injured adult patients and 27 SAH patients (NCCU, Cambridge); 60 patients undergoing cardiopulmonary bypass (Johns Hopkins Hospital, Baltimore); and 23 patients with sepsis (University Hospital, Basel). In addition, MCA flow velocity (FV) was monitored intermittently using transcranial Doppler. FV-derived and ICP-derived pressure reactivity indices (PRx, Mx), as well as NIRS-derived reactivity indices (Cox, Tox, Thx), were calculated and showed significant correlation with each other in all cohorts. Error-bar charts showing the reactivity index PRx versus CPP (optimal CPP chart), as well as similar curves for NIRS indices versus CPP and ABP, were also demonstrated. Conclusions: ICM+ software is proving to be a very useful tool for enhancing the battery of available means for monitoring cerebral vasoreactivity and potentially facilitating autoregulation-guided therapy. The complexity of the data analysis is hidden inside loadable profiles, thus allowing investigators to take full advantage of validated protocols including advanced processing formulas.
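The reactivity indices mentioned (PRx, Mx, Cox, Tox, Thx) are all moving correlation coefficients between slow waves of an input pressure and an output signal. A minimal sketch of PRx as it is commonly computed (the Pearson correlation of consecutive 10-s means of ABP and ICP over a roughly 5-min window); the window choices follow common practice rather than this paper's exact configuration:

```python
import numpy as np

def prx(abp, icp, fs=100.0, mean_win=10.0, n_means=30):
    """Pressure reactivity index: Pearson correlation between 30
    consecutive 10-s averages of ABP and ICP (~5-min moving window)."""
    block = int(mean_win * fs)
    n = min(len(abp), len(icp)) // block
    a = np.asarray(abp)[:n * block].reshape(n, block).mean(axis=1)
    i = np.asarray(icp)[:n * block].reshape(n, block).mean(axis=1)
    out = np.full(n, np.nan)
    for k in range(n_means, n + 1):
        out[k - 1] = np.corrcoef(a[k - n_means:k], i[k - n_means:k])[0, 1]
    return out   # one value per 10-s step; positive values suggest impairment
```

The NIRS-derived indices follow the same template with the NIRS oxygenation signal substituted for ICP and, for Thx, ABP replaced by the relevant input.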
Abstract:
A multiwell plate bioassay was developed using genetically modified bacteria (bioreporter cells) to detect inorganic arsenic extracted from rice. The bacterial cells expressed luciferase upon exposure to arsenite, the activity of which was detected by measurement of cellular bioluminescence. The bioreporter cells detected arsenic in all rice varieties tested, with averages of 0.02-0.15 microg of arsenite equivalent per gram of dry weight and a method detection limit of 6 ng of arsenite per gram of dry rice. This amounted to between approximately 20 and 90% of the total As content reported by chemical methods for the same sample and suggested that a major proportion of arsenic in rice is in the inorganic form. Calibrations of the bioassay with pure inorganic and organic arsenic forms showed that the bacterial cells react to arsenite with highest affinity, followed by arsenate (with 25% response relative to an equivalent arsenite concentration) and trimethylarsine oxide (at 10% relative response). A method for biocompatible arsenic extraction was elaborated, which most optimally consisted of (i) grinding rice to powder, (ii) mixing with an aqueous solution containing pancreatic enzymes, (iii) mechanical shearing, (iv) extraction in mild acid conditions and moderate heat, and (v) centrifugation and pH neutralization. Detection of mainly inorganic arsenic by the bacterial cells may have important advantages for toxicity assessment of rice consumption and would form a good complement to total chemical arsenic determination.
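Because the reporter responds to the different arsenic species with different affinities, sample readings are expressed as arsenite equivalents by interpolating on an arsenite calibration curve. A minimal sketch of that conversion; the calibration points and the sample reading are placeholders, not values from the paper:

```python
import numpy as np

def arsenite_equivalents(signal, cal_conc, cal_signal):
    """Convert a sample's bioluminescence reading into arsenite
    equivalents by interpolating on the calibration curve."""
    order = np.argsort(cal_signal)
    return np.interp(signal,
                     np.asarray(cal_signal, dtype=float)[order],
                     np.asarray(cal_conc, dtype=float)[order])

# Relative response factors reported for the reporter cells:
# arsenite 1.0, arsenate 0.25, TMAO 0.10, so for a known speciation the
# expected signal scales with C_AsIII + 0.25*C_AsV + 0.10*C_TMAO.

print(arsenite_equivalents(420.0,
                           cal_conc=[0, 5, 10, 20, 40],       # ng/g, placeholder
                           cal_signal=[50, 180, 330, 600, 1100]))
```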