10 results for supported aqueous-phase catalyst
at Université de Lausanne, Switzerland
Abstract:
Petroleum hydrocarbons are common contaminants in marine and freshwater aquatic habitats, often occurring as a result of oil spillage. Rapid and reliable on-site tools for measuring the bioavailable hydrocarbon fractions, i.e., those that are most likely to cause toxic effects or are available for biodegradation, would assist in assessing potential ecological damage and in following the progress of cleanup operations. Here we examined the suitability of a set of rapid bioassays (2-3 h) using bacteria expressing the LuxAB luciferase to measure the presence of short-chain linear alkanes, monoaromatic and polyaromatic compounds, biphenyls, and DNA-damaging agents in seawater after a laboratory-scale oil spill. Five independent spills of 20 mL of NSO-1 crude oil on 2 L of seawater (North Sea or Mediterranean Sea) were carried out in 5 L glass flasks for periods of up to 10 days. Bioassays readily detected ephemeral concentrations of short-chain alkanes and BTEX (benzene, toluene, ethylbenzene, and xylenes) in the seawater within minutes to hours after the spill, reaching maxima of up to 80 μM within 6-24 h, after which they decreased to low or undetectable levels. The strong decrease in short-chain alkanes and BTEX may have been due to their volatilization or biodegradation, which was supported by changes in the microbial community composition. Two- and three-ring PAHs appeared in the seawater phase after 24 h at concentrations of up to 1 μM naphthalene equivalents and remained above 0.5 μM for the duration of the experiment. DNA-damage-sensitive bioreporters did not produce any signal with the oil-spilled aqueous-phase samples, whereas bioassays for (hydroxy)biphenyls showed occasional responses. Chemical analysis for alkanes and PAHs in contaminated seawater samples supported the bioassay data but did not show the typical ephemeral peaks observed with the bioassays. We conclude that bacterium-based bioassays can be a suitable alternative for rapid on-site quantitative measurement of hydrocarbons in seawater.
Abstract:
Genetically constructed microbial biosensors for measuring organic pollutants are mostly applied to aqueous samples. Unfortunately, the detection limit of most biosensors is insufficient to detect pollutants at low but environmentally relevant concentrations. However, organic pollutants with low water solubility often have significant gas-water partitioning coefficients, which in principle make it possible to measure such compounds in the gas rather than the aqueous phase. Here we describe the first use of a microbial biosensor for measuring organic pollutants directly in the gas phase. For this purpose, we reconstructed a bioluminescent Pseudomonas putida naphthalene biosensor strain to carry the NAH7 plasmid and a chromosomally inserted gene fusion between the sal promoter and the luxAB genes. Specific calibration studies were performed with suspended and filter-immobilized biosensor cells, in aqueous solution and in the gas phase. Gas-phase measurements with filter-immobilized biosensor cells in closed flasks containing a naphthalene-contaminated aqueous phase showed that the biosensor cells can measure naphthalene effectively. The biosensor cells on the filter responded with light output increasing in proportion to the naphthalene concentration added to the water phase, even though only a small proportion of the naphthalene was present in the gas phase. In fact, the biosensor cells accumulated naphthalene more efficiently through the gas phase than in aqueous suspension, probably owing to faster transport of naphthalene to the cells in the gas phase. This led to a 10-fold lower detectable aqueous naphthalene concentration (50 nM instead of 0.5 μM). Thus, measuring organic pollutants in the gas phase with bacterial biosensors is a valid method for increasing the sensitivity of these valuable biological devices.
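For orientation, the gas-water partitioning argument above can be made quantitative with Henry's law. The sketch below is illustrative only: the dimensionless Henry constant for naphthalene, the volumes, and the function name are assumptions, not parameters taken from the study.

```python
def headspace_fraction(c_aq_molar: float,
                       h_cc: float = 0.017,   # assumed dimensionless Henry constant (Cg/Caq) for naphthalene near 25 C
                       v_aq_l: float = 1.0,   # assumed aqueous volume (L)
                       v_gas_l: float = 1.0): # assumed headspace volume (L)
    """Return (gas-phase concentration, fraction of total mass in the gas phase)
    at equilibrium, assuming ideal partitioning C_gas = H_cc * C_aq."""
    c_gas = h_cc * c_aq_molar
    n_gas = c_gas * v_gas_l
    n_aq = c_aq_molar * v_aq_l
    return c_gas, n_gas / (n_gas + n_aq)

# Example: 50 nM aqueous naphthalene, the detection limit reported above.
c_gas, frac = headspace_fraction(50e-9)
print(f"gas phase: {c_gas * 1e9:.2f} nM equivalent, {frac:.1%} of total mass")
```

With these assumed values only about 2% of the naphthalene sits in the headspace, consistent with the abstract's point that the cells respond strongly even though the gas-phase fraction is small.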
Abstract:
Glycosyl-inositolphospholipid (GPL) anchoring structures are incorporated into GPL-anchored proteins immediately posttranslationally in the rough endoplasmic reticulum, but the biochemical and cellular constituents involved in this "glypiation" process are unknown. To establish whether glypiation could be achieved in vitro, mRNAs generated by transcription of cDNAs encoding two GPL-anchored proteins, murine Thy-1 antigen and human decay-accelerating factor (DAF), and a conventionally anchored control protein, polymeric-immunoglobulin receptor (IgR), were translated in a rabbit reticulocyte lysate. Upon addition of dog pancreatic rough microsomes, nascent polypeptides generated from the three mRNAs translocated into vesicles. Dispersal of the vesicles with Triton X-114 detergent and incubation of the hydrophobic phase with phosphatidylinositol-specific phospholipases C and D, enzymes specific for GPL-anchor structures, released Thy-1 and DAF but not IgR protein into the aqueous phase. The selective incorporation of phospholipase-sensitive anchoring moieties into Thy-1 and DAF but not IgR translation products during in vitro translocation indicates that rough microsomes are able to support and regulate glypiation.
Abstract:
The purpose of this study was to design microspheres combining sustained delivery and enhanced intracellular penetration for ocular administration of antisense oligonucleotides. Nanosized complexes of antisense TGF-beta2 phosphorothioate oligonucleotides (PS-ODN) with polyethylenimine (PEI), as well as naked PS-ODN, were encapsulated into poly(lactide-co-glycolide) microspheres prepared by the double-emulsion solvent evaporation method. The PS-ODN was introduced either naked or complexed in the inner aqueous phase of the first emulsion. We observed a marked influence of microsphere composition on porosity, size distribution, and PS-ODN encapsulation efficiency. In particular, the presence of PEI induced the formation of large pores on the microsphere surface. Introduction of NaCl into the outer aqueous phase increased the encapsulation efficiency and reduced microsphere porosity. In vitro release kinetics of PS-ODN were also investigated: the higher the porosity, the faster the release and the larger the burst effect. Using an analytical solution of Fick's second law of diffusion, it was shown that the early phase of PS-ODN and PS-ODN-PEI complex release was primarily controlled by pure diffusion, irrespective of the type of microsphere. Finally, microspheres containing antisense TGF-beta2 nanosized complexes were shown, after subconjunctival administration to rabbit, to significantly increase intracellular penetration of ODN in conjunctival cells and subsequently to improve bleb survival in a rabbit experimental model of filtering surgery. These results open up interesting prospects for the local controlled delivery of genetic material into the eye.
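The diffusion-controlled early release invoked above is commonly checked against the short-time solution of Fick's second law for a sphere. A minimal sketch follows, using the generic textbook (Crank) formula; the diffusivity, radius, and names are placeholders, not values from the study.

```python
import math

def early_release_fraction(d_m2_s: float, t_s: float, r_m: float) -> float:
    """Short-time approximation for release from a sphere:
    Mt/Minf ~ 6*sqrt(D*t/(pi*r^2)) - 3*D*t/r^2, valid roughly for Mt/Minf < 0.4."""
    tau = d_m2_s * t_s / r_m**2
    return 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau

# Placeholder values: D = 1e-16 m^2/s, r = 10 um microsphere radius.
for hours in (1, 6, 24):
    f = early_release_fraction(1e-16, hours * 3600.0, 10e-6)
    print(f"{hours:>3} h: Mt/Minf = {f:.3f}")
```

Fitting such a curve to early release data is the usual way to decide, as the authors did, whether the burst phase is purely diffusive.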
Abstract:
The polycyclic aromatic hydrocarbon (PAH)-degrading strain Burkholderia sp. RP007 served as host strain for the design of a bacterial biosensor for the detection of phenanthrene. RP007 was transformed with a reporter plasmid containing a transcriptional fusion between the phnS putative promoter/operator region and the gene encoding the enhanced green fluorescent protein (GFP). The resulting bacterial biosensor, Burkholderia sp. strain RP037, produced significant amounts of GFP after batch incubation in the presence of phenanthrene crystals. Co-incubation with acetate did not disturb the phenanthrene-specific response but resulted in a homogeneously responding population of cells. Active metabolism was required for induction with phenanthrene. The magnitude of GFP induction was influenced by physical parameters affecting the phenanthrene flux to the cells, such as the contact surface area between solid phenanthrene and the aqueous phase, addition of surfactant, and slow phenanthrene release from Model Polymer Release System beads or from a water-immiscible oil. These results strongly suggest that the bacterial biosensor can sense different phenanthrene fluxes while maintaining phenanthrene metabolism, thus acting as a genuine sensor for phenanthrene bioavailability. A relationship between GFP production and phenanthrene mass transfer is proposed.
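The proposed link between GFP output and phenanthrene flux can be pictured with a simple linear mass-transfer model. This is only a sketch under assumed values: the mass-transfer coefficient, the contact area, and the function name are illustrative and do not come from the paper (the solubility figure is a commonly cited value for phenanthrene).

```python
def dissolution_flux(k_l_m_s: float, area_m2: float,
                     c_sat_mol_m3: float, c_bulk_mol_m3: float) -> float:
    """Linear mass-transfer model for dissolution of solid phenanthrene into
    the aqueous phase: J = k_L * A * (C_sat - C_bulk), in mol/s."""
    return k_l_m_s * area_m2 * (c_sat_mol_m3 - c_bulk_mol_m3)

# Illustrative: phenanthrene solubility ~1.1 mg/L (~6e-3 mol/m^3), bulk ~0,
# assumed k_L = 1e-5 m/s and 1 cm^2 crystal-water contact area.
j = dissolution_flux(1e-5, 1e-4, 6e-3, 0.0)
print(f"flux ~ {j:.2e} mol/s")  # larger contact area or surfactant -> larger flux
```

In this picture, the experiments above (varying contact area, surfactant, slow-release beads) all modulate J while C_sat stays fixed, which is why the reporter signal tracks bioavailability rather than total concentration.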
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.
In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine-learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the ensemble of approximate responses and predicts the expected responses of the exact model. The proposed methodology uses all the available information without perceptible additional computational cost and makes the uncertainty propagation more accurate and robust.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact response curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows diagnosing the quality of the error model in the functional space. The methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model strongly reduces the computational cost while providing a good estimate of the uncertainty, and the individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.
The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up, and we demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a saline intrusion problem in a coastal aquifer.
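To make the functional error-model idea concrete, here is a minimal sketch of the workflow described above, assuming generic numerical tools: scikit-learn's PCA stands in for FPCA on discretized curves, and all names, dimensions, and the toy data are illustrative, not the thesis implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy stand-ins: 200 realizations, responses discretized on 100 time steps.
# proxy_all holds cheap flow-model curves for every realization;
# proxy_sub / exact_sub form the small learning subset where both were run.
proxy_all = rng.normal(size=(200, 100)).cumsum(axis=1)
idx = rng.choice(200, size=20, replace=False)          # learning subset
proxy_sub = proxy_all[idx]
exact_sub = proxy_sub + 0.3 * rng.normal(size=(20, 100)).cumsum(axis=1)

# Dimensionality reduction of both families of curves (FPCA analogue on a grid).
pca_proxy, pca_exact = PCA(n_components=5), PCA(n_components=5)
scores_proxy = pca_proxy.fit_transform(proxy_sub)
scores_exact = pca_exact.fit_transform(exact_sub)

# Error model: regression from proxy scores to exact scores.
reg = LinearRegression().fit(scores_proxy, scores_exact)

# Correct every approximate response and map back to curves.
corrected = pca_exact.inverse_transform(
    reg.predict(pca_proxy.transform(proxy_all)))
print(corrected.shape)  # (200, 100): predicted "exact" curves for all realizations
```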
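Likewise, the two-stage MCMC filter mentioned above can be sketched in a few lines: a proposal is first screened with the cheap proxy-plus-error-model likelihood, and the expensive exact model is run only for proposals that survive the first stage. This follows the standard delayed-acceptance construction; the likelihood functions and proposal mechanism are placeholders to be supplied by the user.

```python
import math
import random

def two_stage_mcmc_step(theta, log_like_proxy, log_like_exact, propose):
    """One step of two-stage (delayed-acceptance) MCMC with a symmetric proposal.
    Stage 1 screens with the cheap proxy likelihood; the exact flow model is
    evaluated only if stage 1 accepts, so cheap rejections cost no exact run."""
    cand = propose(theta)
    # Stage 1: screening with the proxy (+ error model) likelihood.
    a1 = min(1.0, math.exp(log_like_proxy(cand) - log_like_proxy(theta)))
    if random.random() >= a1:
        return theta                      # rejected cheaply, no exact simulation
    # Stage 2: correction with the exact likelihood keeps the chain exact.
    a2 = min(1.0, math.exp(
        (log_like_exact(cand) - log_like_exact(theta))
        + (log_like_proxy(theta) - log_like_proxy(cand))))
    return cand if random.random() < a2 else theta
```

The stage-2 ratio cancels the proxy screening bias, so the stationary distribution is that of the exact model; the gain in acceptance rate comes from never running the exact simulator on proposals the proxy already deems poor.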
Abstract:
Purpose/Objective(s): RT with TMZ is the standard for GBM. dd TMZ causes prolonged MGMT depletion in mononuclear cells and possibly in tumor. The RTOG 0525 trial (ASCO 2011) did not show an advantage of dd TMZ for survival or progression-free survival. We conducted exploratory, hypothesis-generating subset analyses to detect possible benefit from dd TMZ. Materials/Methods: Patients were randomized to std (150-200 mg/m2 x 5 d) or dd TMZ (75-100 mg/m2 x 21 d) q 4 weeks for 6-12 cycles. Eligibility included age >18, KPS >=60, and >1 cm2 of tissue for prospective MGMT analysis for stratification. Further analyses were performed for all randomized patients ("intent-to-treat", ITT) and for all patients starting protocol therapy (SPT). Subset analyses were performed by RPA class (III, IV, V), KPS (90-100, >=50, <50), resection (partial, total), gender (female, male), and neurologic dysfunction (nf = none, minor, moderate). Results: No significant difference was seen in median OS (16.6 vs. 14.9 months) or PFS (5.5 vs. 6.7 months, p = 0.06). MGMT methylation was linked to improved OS (21.2 vs. 14 months, p < 0.0001) and PFS (8.7 vs. 5.7 months, p < 0.0001). For the ITT population (n = 833), there was no OS benefit from dd TMZ in any subset. Two subsets showed a PFS benefit for dd TMZ: RPA class III (6.2 vs. 12.6 months, HR 0.69, p = 0.03) and nf = minor (HR 0.77, p = 0.01). For RPA III, dd dramatically delayed progression, but after progression dd patients died more quickly than std; a similar pattern was observed for nf = minor. For the SPT group (n = 714) there was neither a PFS nor an OS benefit for dd TMZ overall; for RPA class III and nf = minor, there was a PFS benefit for dd TMZ (HR 0.73, p = 0.08; HR 0.77, p = 0.02). For the nf = moderate subset, in both ITT and SPT, the std arm showed superior OS (14.4 vs. 10.9 months) compared with dd, without improved PFS (HR 1.46, p = 0.03; and HR 1.74, p = 0.01). In terms of methylation status within this subset, there were more methylated patients in the std arm of the ITT subset (n = 159; 32 vs. 24%); for the SPT subset (n = 124), methylation status was similar between arms. Conclusions: This study did not demonstrate improved OS for dd TMZ in any subgroup, but in 2 highly functional subgroups PFS was significantly increased. These data generate the testable hypothesis that intensive treatment may selectively improve disease control in those most likely able to tolerate dd therapy. Interpretation should be cautious given the small sample sizes, multiple comparisons, and other confounders. Acknowledgment: This project was supported by RTOG grant U10 CA21661 and CCOP grant U10 CA37422 from the National Cancer Institute (NCI).
Abstract:
The application of two approaches for high-throughput, high-resolution X-ray phase contrast tomographic imaging in use at the tomographic microscopy and coherent radiology experiments (TOMCAT) beamline of the SLS is discussed and illustrated. Differential phase contrast (DPC) imaging, using a grating interferometer and a phase-stepping technique, is integrated into the beamline environment at TOMCAT in terms of fast acquisition and reconstruction of data and the ability to scan samples within an aqueous environment. The second phase contrast method is a modified transport of intensity approach that can yield the 3D distribution of the decrement of the refractive index of a weakly absorbing object from a single tomographic dataset. The two methods are complementary: the DPC method is characterised by higher sensitivity and moderate resolution with larger samples, whereas the modified transport of intensity approach is particularly suited to small specimens when high resolution (around 1 μm) is required. Both are being applied to investigations in the biological and materials science fields.
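For reference, transport-of-intensity phase retrieval is usually based on the textbook relation below, which links the measured intensity derivative along the beam to the phase, and hence to the refractive index decrement δ. This is the generic form, not necessarily the exact modified variant implemented at TOMCAT.

```latex
\nabla_{\perp}\cdot\left[\,I(x,y,z)\,\nabla_{\perp}\phi(x,y,z)\,\right]
  = -\,k\,\frac{\partial I(x,y,z)}{\partial z},
\qquad
\phi(x,y) = -\,k\int \delta(x,y,z)\,\mathrm{d}z,
\qquad
k = \frac{2\pi}{\lambda}.
```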
Abstract:
Solid-phase extraction (SPE) in tandem with dispersive liquid-liquid microextraction (DLLME) has been developed for the determination of mononitrotoluenes (MNTs) in several aquatic samples using gas chromatography with flame ionization detection (GC-FID). In the hyphenated SPE-DLLME procedure, MNTs were first extracted from a large volume of aqueous sample (100 mL) onto 500 mg of octadecyl silane (C18) sorbent. After elution of the analytes from the sorbent with acetonitrile, the resulting solution was subjected to the DLLME procedure so that additional preconcentration could be achieved. The parameters influencing the extraction efficiency, such as breakthrough volume, type and volume of the elution solvent (disperser solvent) and extracting solvent, as well as salt addition, were studied and optimized. The calibration curves were linear in the range of 0.5-500 μg/L and the limit of detection for all analytes was found to be 0.2 μg/L. The relative standard deviations (for 0.75 μg/L of MNTs) without internal standard varied from 2.0 to 6.4% (n = 5). The relative recoveries from well, river, and sea water samples, spiked at a concentration of 0.75 μg/L of the analytes, were in the range of 85-118%.
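For intuition on why tandem SPE-DLLME boosts sensitivity, the ideal overall enrichment is roughly the product of the two volume ratios. A toy calculation follows; only the 100 mL sample volume comes from the abstract, while the eluate and sediment volumes, recoveries, and names are placeholders.

```python
def overall_enrichment(v_sample_ml: float, v_eluate_ml: float,
                       v_sediment_ul: float,
                       r_spe: float = 1.0, r_dllme: float = 1.0) -> float:
    """Ideal enrichment factor of SPE followed by DLLME:
    EF = (V_sample / V_eluate) * (V_eluate / V_sediment) * recoveries."""
    ef_spe = (v_sample_ml / v_eluate_ml) * r_spe
    ef_dllme = (v_eluate_ml / (v_sediment_ul / 1000.0)) * r_dllme
    return ef_spe * ef_dllme

# 100 mL sample (from the abstract), assumed 1 mL eluate and 20 uL sediment:
print(f"EF ~ {overall_enrichment(100, 1.0, 20):.0f}")  # ~5000 at full recovery
```

In practice recoveries below 100% and breakthrough losses lower this figure, which is why the abstract's optimization of eluent, extractant, and salt addition matters.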
Abstract:
BACKGROUND: Patients with BM rarely survive >6 months and are commonly excluded from clinical trials. We aimed to improve outcome by exploring 2 combined-modality regimens with agents that were novel at the time and for which single-agent activity had been shown. METHODS: NSCLC patients with multiple BM were randomized to WBRT (10 × 3 Gy) and either GFT 250 mg p.o. daily or TMZ 75 mg/m2 p.o. daily for 21 of every 28 days, starting on Day 1 of RT and continued until PD. The primary endpoint was overall survival; a Simon's optimal 2-stage design was based on assumptions for the 3-month survival rate. Cognitive functioning and quality of life were also evaluated. RESULTS: Fifty-nine patients (36 M, 23 F; 9 after prior chemotherapy) were included. Median age was 61 years (range 46-82); WHO PS was 0 in 18 patients, 1 in 31 patients, and 2 in 10 patients. All but 1 patient had extracranial disease; 33 of 43 (TMZ) and 15 of 16 (GFT) had adenocarcinoma histology. The GFT arm was closed early after the stage 1 analysis when the prespecified 3-month survival rate threshold (66%) was not reached; causes of death were not GFT related. Main causes of death were PD in the CNS (24%), systemic PD (41%), both (8%), and toxicity (10%) [intestinal perforation (2 patients), pneumonia (2), pulmonary emboli (1), pneumonitis NOS (1), seizure (1)]. Other patient characteristics for the 2 trial arms, TMZ (n = 43)/GFT (n = 16): median treatment duration 1.6/1.8 months; grade 3-4 toxicity: lymphopenia 5 patients (12%)/0; fatigue 8 patients (19%)/2 patients (13%). Survival data for the TMZ/GFT arms: 3-month survival rate 58.1% (95% CI 42.1-73)/62.5% (95% CI 35-85); median OS 4.9 months (95% CI 2.5-5.6)/6.3 months (95% CI 2.2-14.6); median PFS 1.8 months (95% CI 1.5-1.8)/1.8 months (95% CI 1.1-3.9); median time to neurologic progression 8.0 months (95% CI 2.2-X)/4.8 months (95% CI 3.9-10.5). In a model to predict survival time including the variables age, PS, number of BM, global QL, total MMSE score, and subjective cognitive function, none of the variables accounted for a significant improvement in survival time. CONCLUSIONS: The combinations of WBRT with GFT or TMZ were feasible. However, in this unselected patient population survival remains poor and a high rate of complications was observed. Four patients died as a result of high-dose corticosteroids. Preliminary evaluation of cognitive function and QL failed to show significant improvement. Indications and patient selection for palliative treatment should be revisited, and careful monitoring and supportive care are required. Research and progress for this frequent clinical situation are urgently needed. Trial partly supported by AstraZeneca (Switzerland), Essex Chemie (Switzerland), and the Swiss Federal Government.
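The Simon-type early-stopping logic that closed the GFT arm can be illustrated with a simple exact-binomial check. Everything here (stage size, futility level, decision rule, names) is an illustrative reconstruction; only the 66% threshold comes from the abstract, and the trial's actual design parameters may differ.

```python
from scipy.stats import binom

def stage1_stop(successes: int, n1: int, p0: float = 0.66) -> bool:
    """Illustrative stage-1 futility rule: stop the arm if observing this few
    3-month survivors is unlikely under the hoped-for rate p0 (one-sided)."""
    p_value = binom.cdf(successes, n1, p0)   # P(X <= observed | true rate p0)
    return p_value < 0.10                    # placeholder futility level

# E.g., 7 of 16 patients alive at 3 months against a 66% target rate:
print(stage1_stop(7, 16))  # True -> close the arm for futility
```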