273 results for Error-Free Transformations
Abstract:
How do plants that move and spread across landscapes become branded as weeds and thereby objects of contention and control? We outline a political ecology approach that builds on a Lefebvrian understanding of the production of space, identifying three scalar moments that make plants into 'weeds' in different spatial contexts and landscapes. The three moments are: the operational scale, which relates to empirical phenomena in nature and society; the observational scale, which defines formal concepts of these phenomena and their implicit or explicit 'biopower' across institutional and spatial categories; and the interpretive scale, which is communicated through stories and actions expressing human feelings or concerns regarding the phenomena and processes of socio-spatial change. Together, these three scalar moments interact to produce a political ecology of landscape transformation, where biophysical and socio-cultural processes of daily life encounter formal categories and modes of control as well as emotive and normative expectations in shaping landscapes. Using three exemplar 'weeds' - acacia, lantana and ambrosia - our political ecology approach to landscape transformations shows that weeds do not act alone and that invasives are not inherently bad organisms. Humans and weeds go together; plants take advantage of spaces and opportunities that we create. Human desires for preserving certain social values in landscapes, in contradiction to actual transformations, are often at the heart of definitions of and conflicts over weeds or invasives.
Abstract:
Notre consommation en eau souterraine, en particulier comme eau potable ou pour l'irrigation, a considérablement augmenté au cours des années. De nombreux problèmes font alors leur apparition, allant de la prospection de nouvelles ressources à la remédiation des aquifères pollués. Indépendamment du problème hydrogéologique considéré, le principal défi reste la caractérisation des propriétés du sous-sol. Une approche stochastique est alors nécessaire afin de représenter cette incertitude en considérant de multiples scénarios géologiques et en générant un grand nombre de réalisations géostatistiques. Nous rencontrons alors la principale limitation de ces approches qui est le coût de calcul dû à la simulation des processus d'écoulements complexes pour chacune de ces réalisations. Dans la première partie de la thèse, ce problème est investigué dans le contexte de propagation de l'incertitude, où un ensemble de réalisations est identifié comme représentant les propriétés du sous-sol. Afin de propager cette incertitude à la quantité d'intérêt tout en limitant le coût de calcul, les méthodes actuelles font appel à des modèles d'écoulement approximés. Cela permet l'identification d'un sous-ensemble de réalisations représentant la variabilité de l'ensemble initial. Le modèle complexe d'écoulement est alors évalué uniquement pour ce sous-ensemble, et, sur la base de ces réponses complexes, l'inférence est faite. Notre objectif est d'améliorer la performance de cette approche en utilisant toute l'information à disposition. Pour cela, le sous-ensemble de réponses approximées et exactes est utilisé afin de construire un modèle d'erreur, qui sert ensuite à corriger le reste des réponses approximées et prédire la réponse du modèle complexe. Cette méthode permet de maximiser l'utilisation de l'information à disposition sans augmentation perceptible du temps de calcul. La propagation de l'incertitude est alors plus précise et plus robuste.
La stratégie explorée dans le premier chapitre consiste à apprendre d'un sous-ensemble de réalisations la relation entre les modèles d'écoulement approximé et complexe. Dans la seconde partie de la thèse, cette méthodologie est formalisée mathématiquement en introduisant un modèle de régression entre les réponses fonctionnelles. Comme ce problème est mal posé, il est nécessaire d'en réduire la dimensionnalité. Dans cette optique, l'innovation du travail présenté provient de l'utilisation de l'analyse en composantes principales fonctionnelles (ACPF), qui non seulement effectue la réduction de dimensionnalité tout en maximisant l'information retenue, mais permet aussi de diagnostiquer la qualité du modèle d'erreur dans cet espace fonctionnel. La méthodologie proposée est appliquée à un problème de pollution par une phase liquide non-aqueuse et les résultats obtenus montrent que le modèle d'erreur permet une forte réduction du temps de calcul tout en estimant correctement l'incertitude. De plus, pour chaque réponse approximée, une prédiction de la réponse complexe est fournie par le modèle d'erreur. Le concept de modèle d'erreur fonctionnel est donc pertinent pour la propagation de l'incertitude, mais aussi pour les problèmes d'inférence bayésienne. Les méthodes de Monte Carlo par chaîne de Markov (MCMC) sont les algorithmes les plus communément utilisés afin de générer des réalisations géostatistiques en accord avec les observations. Cependant, ces méthodes souffrent d'un taux d'acceptation très bas pour les problèmes de grande dimensionnalité, résultant en un grand nombre de simulations d'écoulement gaspillées. Une approche en deux temps, le "MCMC en deux étapes", a été introduite afin d'éviter les simulations du modèle complexe inutiles par une évaluation préliminaire de la réalisation. Dans la troisième partie de la thèse, le modèle d'écoulement approximé couplé à un modèle d'erreur sert d'évaluation préliminaire pour le "MCMC en deux étapes".
Nous démontrons une augmentation du taux d'acceptation par un facteur de 1.5 à 3 en comparaison avec une implémentation classique de MCMC. Une question reste sans réponse : comment choisir la taille de l'ensemble d'entraînement et comment identifier les réalisations permettant d'optimiser la construction du modèle d'erreur. Cela requiert une stratégie itérative afin que, à chaque nouvelle simulation d'écoulement, le modèle d'erreur soit amélioré en incorporant les nouvelles informations. Ceci est développé dans la quatrième partie de la thèse, où cette méthodologie est appliquée à un problème d'intrusion saline dans un aquifère côtier. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
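The error-model construction described above lends itself to a compact illustration. Below is a minimal, self-contained sketch of the idea on synthetic data: a per-time-step linear least-squares fit between proxy and exact curves stands in for the FPCA-based functional regression of the thesis, and all curve shapes, ensemble sizes and names are invented for illustration.

```python
import math
import random

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on paired scalars."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def build_error_model(proxy_train, exact_train):
    """One linear correction per curve time step, learned on the subset
    for which both the proxy and the exact response are known."""
    steps = len(proxy_train[0])
    return [fit_linear([p[t] for p in proxy_train],
                       [e[t] for e in exact_train]) for t in range(steps)]

def correct(model, proxy_curve):
    """Apply the error model to predict the exact response from a proxy."""
    return [a * x + b for (a, b), x in zip(model, proxy_curve)]

random.seed(0)
# Synthetic ensemble: each "realization" yields an exact breakthrough-like
# curve; the proxy is a biased, rescaled, noisy version of it.
ks = [random.uniform(0.5, 2.0) for _ in range(30)]
exact = [[math.tanh(0.1 * t * k) for t in range(20)] for k in ks]
proxy = [[0.8 * v + 0.1 + random.gauss(0, 0.01) for v in c] for c in exact]

model = build_error_model(proxy[:10], exact[:10])  # exact model run on 10 only
corrected = [correct(model, p) for p in proxy[10:]]

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

raw = sum(rmse(p, e) for p, e in zip(proxy[10:], exact[10:])) / 20
fixed = sum(rmse(c, e) for c, e in zip(corrected, exact[10:])) / 20
print(fixed < raw)  # corrected proxies track the exact curves more closely
```

The point of the sketch is the budget: the "exact" model is evaluated for only 10 of 30 realizations, yet all 30 receive an exact-model prediction.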
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
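The two-stage (delayed-acceptance) screening described above can be sketched with a toy target. In the sketch below, a standard normal log-density plays the role of the expensive exact flow-model likelihood and a deliberately mis-scaled density plays the proxy; both are assumptions for illustration, not the thesis models. Proposals are first screened with the cheap proxy, and the exact model is evaluated only for proposals that survive the screen, with the second-stage ratio chosen so that the chain still targets the exact density.

```python
import math
import random

def exact_logpdf(x):
    """Stand-in for the expensive exact-model log-likelihood (toy: N(0, 1))."""
    return -0.5 * x * x

def proxy_logpdf(x):
    """Cheap, biased approximation (toy: N(0, 1.2^2))."""
    return -0.5 * (x / 1.2) ** 2

def two_stage_mcmc(n_iter, step=1.0, seed=42):
    rng = random.Random(seed)
    x = 0.0
    lp_exact = exact_logpdf(x)
    exact_calls = 1
    chain = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0, step)
        # Stage 1: screen the symmetric random-walk proposal with the proxy.
        log_r1 = proxy_logpdf(prop) - proxy_logpdf(x)
        if math.log(rng.random()) < log_r1:
            # Stage 2: delayed-acceptance correction with the exact model,
            # accept with min(1, pi(prop) q1(x) / (pi(x) q1(prop))).
            lp_prop = exact_logpdf(prop)
            exact_calls += 1
            if math.log(rng.random()) < (lp_prop - lp_exact) - log_r1:
                x, lp_exact = prop, lp_prop
        chain.append(x)
    return chain, exact_calls

chain, exact_calls = two_stage_mcmc(5000)
print(exact_calls)  # far fewer exact evaluations than the 5000 iterations
```

Proposals rejected at stage 1 cost only a proxy evaluation, which is exactly the saving the thesis exploits when the proxy is a flow approximation plus an error model.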
Abstract:
PURPOSE: Because desmoid tumors exhibit an unpredictable clinical course, translational research is crucial to identify the predictive factors of progression in addition to the clinical parameters. The main issue is to detect patients who are at a higher risk of progression. The aim of this work was to identify molecular markers that can predict progression-free survival (PFS). EXPERIMENTAL DESIGN: Gene-expression screening was conducted on 115 available independent untreated primary desmoid tumors using cDNA microarray. We established a prognostic gene-expression signature composed of 36 genes. To test robustness, we randomly generated 1,000 36-gene signatures, compared their outcome association with that of our defined 36-gene molecular signature, and calculated the positive predictive value (PPV) and negative predictive value (NPV). RESULTS: Multivariate analysis showed that our molecular signature had a significant impact on PFS, while no clinical factor had any prognostic value. Among the 1,000 random signatures generated, 56.7% were significant and none was more significant than our 36-gene molecular signature. PPV and NPV were high (75.58% and 81.82%, respectively). Finally, the top two genes downregulated in no-recurrence were FECH and STOML2 and the top gene upregulated in no-recurrence was TRIP6. CONCLUSIONS: By analyzing expression profiles, we have identified a gene-expression signature that is able to predict PFS. This tool may be useful for prospective clinical studies. Clin Cancer Res; 21(18); 4194-200. ©2015 AACR.
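For readers unfamiliar with the reported PPV and NPV, both are simple ratios of confusion-matrix counts. A minimal sketch with invented counts (the abstract reports only the resulting percentages, not the underlying counts, so the numbers below are illustrative):

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive value from confusion-matrix counts:
    PPV = TP / (TP + FP), NPV = TN / (TN + FN)."""
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts for illustration only.
ppv, npv = ppv_npv(tp=30, fp=10, tn=40, fn=10)
print(ppv, npv)  # 0.75 0.8
```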
Abstract:
Abstract This work studies the multi-label classification of turns in simple English Wikipedia talk pages into dialog acts. The dataset studied was created and multi-labeled by (Ferschke et al., 2012). The first part analyses dependencies between labels, in order to examine the annotation coherence and to determine a classification method. Then, a multi-label classification is computed, after transforming the problem into binary relevance. Regarding features, whereas (Ferschke et al., 2012) use features such as uni-, bi-, and trigrams, time distance between turns or the indentation level of the turn, other features are considered here: lemmas, part-of-speech tags and the meaning of verbs (according to WordNet). The dataset authors applied approaches such as Naive Bayes or Support Vector Machines. The present paper proposes, as an alternative, to extend linear discriminant analysis with Schoenberg transformations which, following the example of kernel methods, transform the original Euclidean distances into other Euclidean distances, in a space of high dimensionality. Résumé Ce travail étudie la classification supervisée multi-étiquette en actes de dialogue des tours de parole des contributeurs aux pages de discussion de Simple English Wikipedia (Wikipédia en anglais simple). Le jeu de données considéré a été créé et multi-étiqueté par (Ferschke et al., 2012). Une première partie analyse les relations entre les étiquettes pour examiner la cohérence des annotations et pour déterminer une méthode de classification. Ensuite, une classification supervisée multi-étiquette est effectuée, après recodage binaire des étiquettes. Concernant les variables, alors que (Ferschke et al., 2012) utilisent des caractéristiques telles que les uni-, bi- et trigrammes, le temps entre les tours de parole ou l'indentation d'un tour de parole, d'autres descripteurs sont considérés ici : les lemmes, les catégories morphosyntaxiques et le sens des verbes (selon WordNet).
Les auteurs du jeu de données ont employé des approches telles que le Naive Bayes ou les Séparateurs à Vastes Marges (SVM) pour la classification. Cet article propose, de façon alternative, d'utiliser et d'étendre l'analyse discriminante linéaire aux transformations de Schoenberg qui, à l'instar des méthodes à noyau, transforment les distances euclidiennes originales en d'autres distances euclidiennes, dans un espace de haute dimensionnalité.
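The binary-relevance transformation mentioned in both language versions of this abstract reduces multi-label classification to one independent binary problem per label. A minimal sketch on invented toy data, with a simple nearest-centroid rule standing in for the Naive Bayes, SVM or Schoenberg-transformation classifiers discussed above (features and labels here are illustrative, not the Ferschke et al. dataset):

```python
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_binary_relevance(X, Y, labels):
    """Binary relevance: one independent binary classifier per label.
    Here each classifier is a positive/negative centroid pair."""
    models = {}
    for lab in labels:
        pos = [x for x, ys in zip(X, Y) if lab in ys]
        neg = [x for x, ys in zip(X, Y) if lab not in ys]
        models[lab] = (centroid(pos), centroid(neg))
    return models

def predict(models, x):
    """A label is assigned when x is closer to its positive centroid."""
    return {lab for lab, (cp, cn) in models.items() if dist(x, cp) < dist(x, cn)}

# Toy turns: 2-d feature vectors, each multi-labeled with dialog acts.
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8], [0.5, 0.5]]
Y = [{"question"}, {"question", "report"}, {"answer"}, {"answer"}, {"report"}]
models = train_binary_relevance(X, Y, {"question", "answer", "report"})
print(sorted(predict(models, [0.85, 0.15])))
```

The transformation itself is classifier-agnostic, which is why the dataset authors and this paper can plug different base learners into the same label-wise decomposition.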
Abstract:
BACKGROUND/AIMS: Fibroblast growth factor 21 (FGF21) is a key mediator of glucose and lipid metabolism. However, the beneficial effects of exogenous FGF21 administration are attenuated in obese animals and humans with elevated levels of circulating free fatty acids (FFA). METHODS: We investigated in vitro how FFA impact FGF21 effects on hepatic lipid metabolism. RESULTS: In the absence of FFA, FGF21 reduced lipogenesis and increased lipid oxidation in HepG2 cells. Inhibition of lipogenesis was associated with a downregulation of SREBP-1c, FAS and SCD1. The lipid-lowering effect was associated with AMPK and ACC phosphorylation, and upregulation of CPT-1α expression. Further, FGF21 treatment reduced TNFα gene expression, suggesting a beneficial action of FGF21 on inflammation. In contrast, the addition of FFA abolished the positive effects of FGF21 on lipid metabolism. CONCLUSION: In the absence of FFA, FGF21 improves lipid metabolism in HepG2 cells and reduces the inflammatory cytokine TNFα. However, under high levels of FFA, FGF21 action on lipid metabolism and TNFα gene expression is impaired. Therefore, FFA impair FGF21 action in HepG2 cells, potentially through TNFα.
Abstract:
BACKGROUND & AIMS: Parenteral methotrexate is an effective treatment for patients with Crohn's disease, but has never been adequately evaluated in patients with ulcerative colitis (UC). We conducted a randomized controlled trial to determine its safety and efficacy in patients with steroid-dependent UC. METHODS: We performed a double-blind, placebo-controlled trial to evaluate the efficacy of parenteral methotrexate (25 mg/wk) in 111 patients with corticosteroid-dependent UC at 26 medical centers in Europe from 2007 through 2013. Patients were given prednisone (10 to 40 mg/d) when the study began and were randomly assigned to groups (1:1) given placebo or methotrexate (intramuscularly or subcutaneously, 25 mg weekly) for 24 weeks. The primary end point was steroid-free remission (defined as a Mayo score ≤2 with no item >1 and complete withdrawal of steroids) at week 16. Secondary endpoints included clinical remission (defined as a Mayo clinical subscore ≤2 with no item >1) and endoscopic healing without steroids at weeks 16 and/or 24, remission without steroids at week 24, and remission at both weeks 16 and 24. RESULTS: Steroid-free remission at week 16 was achieved by 19 of 60 patients given methotrexate (31.7%) and 10 of 51 patients given placebo (19.6%)-a difference of 12.1% (95% confidence interval [CI]: -4.0% to 28.1%; P = .15). The proportion of patients in steroid-free clinical remission at week 16 was 41.7% in the methotrexate group and 23.5% in the placebo group, for a difference of 18.1% (95% CI: 1.1% to 35.2%; P = .04). The proportions of patients with steroid-free endoscopic healing at week 16 were 35% in the methotrexate group and 25.5% in the placebo group-a difference of 9.5% (95% CI: -7.5% to 26.5%; P = .28). No differences were observed in other secondary end points. More patients receiving placebo discontinued the study because of adverse events (47.1%), mostly caused by UC, than patients receiving methotrexate (26.7%; P = .03). 
A higher proportion of patients in the methotrexate group had nausea and vomiting (21.7%) than in the placebo group (3.9%; P = .006). CONCLUSIONS: In a randomized controlled trial, parenteral methotrexate was not superior to placebo for induction of steroid-free remission in patients with UC. However, methotrexate induced clinical remission without steroids in a significantly larger percentage of patients, resulting in fewer withdrawals from therapy due to active UC. ClinicalTrials.gov ID NCT00498589.
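The primary-endpoint arithmetic above can be checked directly. Assuming a normal-approximation (Wald) interval for the difference of two independent proportions, the reported 12.1% difference and its 95% CI follow from the 19/60 and 10/51 remission counts:

```python
import math

def diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Steroid-free remission at week 16: 19/60 methotrexate vs 10/51 placebo.
d, lo, hi = diff_ci(19, 60, 10, 51)
print(round(d * 100, 1), round(lo * 100, 1), round(hi * 100, 1))  # 12.1 -4.0 28.1
```

The interval crossing zero is what the abstract's P = .15 for the primary end point reflects.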
Abstract:
BACKGROUND: Biliary tract cancer is an uncommon cancer with a poor outcome. We assembled data from the National Cancer Research Institute (UK) ABC-02 study and 10 international studies to determine prognostic outcome characteristics for patients with advanced disease. METHODS: Multivariable analyses of the final dataset from the ABC-02 study were carried out. All variables were simultaneously included in a Cox proportional hazards model, and backward elimination was used to produce the final model (using a significance level of 10%), in which the selected variables were associated independently with outcome. The resulting prognostic score was validated externally by receiver operating characteristic (ROC) curve analysis using the independent international dataset. RESULTS: A total of 410 patients were included from the ABC-02 study and 753 from the international dataset. An overall survival (OS) and progression-free survival (PFS) Cox model was derived from the ABC-02 study. White blood cells, haemoglobin, disease status, bilirubin, neutrophils, gender, and performance status were considered prognostic for survival (all with P < 0.10). Patients with metastatic disease had worse survival [hazard ratio (HR) 1.56 (95% confidence interval (CI) 1.20-2.02)], as did those with Eastern Cooperative Oncology Group performance status (ECOG PS) 2 [HR 2.24 (95% CI 1.53-3.28)]. In a dataset restricted to patients who received cisplatin and gemcitabine with ECOG PS 0 and 1, only haemoglobin, disease status, bilirubin, and neutrophils were associated with PFS and OS. ROC analysis suggested the models generated from the ABC-02 study had a limited prognostic value [6-month PFS: area under the curve (AUC) 62% (95% CI 57-68); 1-year OS: AUC 64% (95% CI 58-69)]. CONCLUSION: These data propose a set of prognostic criteria for outcome in advanced biliary tract cancer derived from the ABC-02 study that are validated in an international dataset.
Although these findings establish the benchmark for the prognostic evaluation of patients with ABC and confirm the value of long-held clinical observations, the ability of the model to correctly predict prognosis is limited and needs to be improved through identification of additional clinical and molecular markers.
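The ROC validation above reports AUC values, which for a prognostic score equal the probability that a randomly chosen case with the event is ranked above one without it. A minimal sketch of that rank-based (Mann-Whitney) computation, on toy scores rather than the study data:

```python
def auc(labels, scores):
    """AUC as the probability a positive case outranks a negative one
    (ties between scores count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two events (label 1) and two non-events (label 0).
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

On this scale the study's AUCs of 62% and 64% sit only modestly above the 50% of a non-informative score, which is what the conclusion's "limited prognostic value" refers to.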
Abstract:
BACKGROUND: Autologous blood transfusion (ABT) efficiently increases sport performance and is the most challenging doping method to detect. Current methods for detecting this practice center on the plasticizer di(2-ethylhexyl) phthalate (DEHP), which enters the stored blood from blood bags. Quantification of this plasticizer and its metabolites in urine can detect the transfusion of autologous blood stored in these bags. However, DEHP-free blood bags are available on the market, including n-butyryl-tri-(n-hexyl)-citrate (BTHC) blood bags. Athletes may shift to using such bags to avoid the detection of urinary DEHP metabolites. STUDY DESIGN AND METHODS: A clinical randomized double-blinded two-phase study was conducted of healthy male volunteers who underwent ABT using DEHP-containing or BTHC blood bags. All subjects received a saline injection for the control phase and a blood donation followed by ABT 36 days later. The excretion kinetics of five urinary DEHP metabolites were quantified with liquid chromatography coupled with tandem mass spectrometry. RESULTS: Surprisingly, considerable levels of urinary DEHP metabolites were observed up to 1 day after blood transfusion with BTHC blood bags. The long-term metabolites mono-(2-ethyl-5-carboxypentyl) phthalate and mono-(2-carboxymethylhexyl) phthalate were the most sensitive biomarkers to detect ABT with BTHC blood bags. Levels of DEHP were high in BTHC bags (6.6%), the tubing in the transfusion kit (25.2%), and the white blood cell filter (22.3%). CONCLUSIONS: The BTHC bag contained DEHP, despite being labeled DEHP-free. Urinary DEHP metabolite measurement is a cost-effective way to detect ABT in the antidoping field even when BTHC bags are used for blood storage.