90 results for hydraulic design
at Université de Lausanne, Switzerland
Abstract:
Double trouble: A hybrid organic-inorganic (organometallic) inhibitor was designed to target glutathione transferases. The metal center is used to direct protein binding, while the organic moiety acts as the active-site inhibitor (see picture). The mechanism of inhibition was studied using a range of biophysical and biochemical methods.
Abstract:
Locating new wind farms is of crucial importance for energy policies of the next decade. To select new locations, an accurate picture of the wind fields is necessary. However, characterizing wind fields is a difficult task, since the phenomenon is highly nonlinear and related to complex topographical features. In this paper, we propose both a nonparametric model to estimate wind speed at different time instants and a procedure to discover underrepresented topographic conditions, where new measuring stations could be added. Compared to space-filling techniques, this latter approach privileges optimization of the output space, thus locating new potential measuring sites through the uncertainty of the model itself.
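As a rough illustration of the siting idea, the sketch below fits a generic nonparametric regressor (a Gaussian process) to hypothetical topographic descriptors and ranks candidate grid cells by predictive uncertainty; the features, data and selection rule are invented for illustration and are not the model used in the paper.

```python
# Illustrative sketch only: a nonparametric regressor with a
# predictive-uncertainty criterion for siting new stations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical topographic descriptors (e.g. elevation, slope, curvature)
# at existing measuring stations, with observed mean wind speeds.
X_stations = rng.uniform(size=(60, 3))
y_wind = 5.0 + 3.0 * X_stations[:, 0] + rng.normal(scale=0.5, size=60)

model = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
model.fit(X_stations, y_wind)

# Candidate grid cells over the region of interest.
X_candidates = rng.uniform(size=(500, 3))
mean, std = model.predict(X_candidates, return_std=True)

# Propose new sites where the model output is most uncertain, i.e. where
# the local topographic conditions are under-represented by the network.
new_sites = np.argsort(std)[-5:]
print("candidate cells for new stations:", new_sites)
```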
Abstract:
Time-lapse crosshole ground-penetrating radar (GPR) data, collected while infiltration occurs, can provide valuable information regarding the hydraulic properties of the unsaturated zone. In particular, the stochastic inversion of such data provides estimates of parameter uncertainties, which are necessary for hydrological prediction and decision making. Here, we investigate the effect of different infiltration conditions on the stochastic inversion of time-lapse, zero-offset-profile GPR data. Inversions are performed using a Bayesian Markov-chain-Monte-Carlo methodology. Our results clearly indicate that considering data collected during a forced infiltration test helps to better refine soil hydraulic properties compared to data collected under natural infiltration conditions.
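The abstract gives no implementation details, but a minimal Metropolis sampler of the kind used in Bayesian Markov-chain-Monte-Carlo inversion might look like the sketch below; the forward model, noise level and single-parameter setup are placeholder assumptions, not the petrophysical relations used in the study.

```python
# Minimal Metropolis MCMC sketch for estimating one soil hydraulic
# parameter from time-lapse travel-time-like data (stand-in forward model).
import numpy as np

rng = np.random.default_rng(1)

def forward(log_ks, times):
    # Hypothetical mapping from log hydraulic conductivity to a
    # time-lapse GPR observable during infiltration (illustrative only).
    return np.exp(log_ks) * np.sqrt(times)

times = np.linspace(1.0, 10.0, 20)
observed = forward(-4.0, times) + rng.normal(scale=1e-3, size=times.size)
sigma = 1e-3                              # assumed data noise (std. dev.)

def log_likelihood(log_ks):
    residual = observed - forward(log_ks, times)
    return -0.5 * np.sum((residual / sigma) ** 2)

current = -3.0                            # starting value of log Ks
current_ll = log_likelihood(current)
samples = []
for _ in range(5000):
    proposal = current + rng.normal(scale=0.05)    # random-walk proposal
    proposal_ll = log_likelihood(proposal)
    if np.log(rng.uniform()) < proposal_ll - current_ll:
        current, current_ll = proposal, proposal_ll
    samples.append(current)

posterior = np.array(samples[1000:])      # discard burn-in
print("posterior mean and std of log Ks:", posterior.mean(), posterior.std())
```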
Abstract:
An active, solvent-free solid sampler was developed for the collection of 1,6-hexamethylene diisocyanate (HDI) aerosol and prepolymers. The sampler was made of a filter impregnated with 1-(2-methoxyphenyl)piperazine contained in a filter holder. Interferences with HDI were observed when a set of cellulose acetate filters and a polystyrene filter holder were used; a glass fiber filter and polypropylene filter cassette gave better results. The applicability of the sampling and analytical procedure was validated with a test chamber, constructed for the dynamic generation of HDI aerosol and prepolymers in commercial two-component spray paints (Desmodur® N75) used in car refinishing. The particle size distribution, temporal stability, and spatial uniformity of the simulated aerosol were established in order to test the sampler. The monitoring of aerosol concentrations was conducted with the solid sampler paired to the reference impinger technique (impinger flasks contained 10 mL of 0.5 mg/mL 1-(2-methoxyphenyl)piperazine in toluene) under a controlled atmosphere in the test chamber. Analyses of derivatized HDI and prepolymers were carried out by high-performance liquid chromatography with ultraviolet detection. The correlation between the solvent-free and the impinger techniques was fairly good (Y = 0.979X - 0.161; R = 0.978) when the tests were conducted in the range of 0.1 to 10 times the threshold limit value (TLV) for HDI monomer and up to 60 µg/m³ (3 U.K. TLVs) for total -N=C=O groups.
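The reported relation Y = 0.979X - 0.161 with R = 0.978 is an ordinary least-squares fit of the solvent-free sampler results against the impinger reference together with the correlation coefficient; a worked example with synthetic concentration values (not the study data) is sketched below.

```python
# Worked example of the paired-method comparison: fit Y = aX + b and
# compute the correlation coefficient R. Values below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
impinger = np.linspace(0.1, 10.0, 30)                       # reference method
solvent_free = 0.98 * impinger - 0.15 + rng.normal(scale=0.2, size=30)

slope, intercept = np.polyfit(impinger, solvent_free, deg=1)
r = np.corrcoef(impinger, solvent_free)[0, 1]
print(f"Y = {slope:.3f}X + {intercept:.3f}, R = {r:.3f}")
```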
Abstract:
The comparison of consecutively manufactured tools and firearms has provided much, but not all, of the basis for the profession of firearm and toolmark examination. The authors accept the fundamental soundness of this approach but appeal to the experimental community to close two minor gaps in the experimental procedure. We suggest that "blinding" and attention to the appropriateness of other experimental conditions would consolidate the foundations of our profession. We do not suggest that previous work is unsound.
Abstract:
Objectives: Skin can be partially regenerated after full-thickness defects by collagen matrices. In this study, we identified the main limitations of induced regeneration with the aim of improving the design of dermal matrices. Methods: Individual mice received a 1 cm², full-thickness skin wound on the dorsum, which was grafted with collagen-GAG matrices or left ungrafted. The healing modulation induced by the collagen-GAG matrices was compared to spontaneous healing and to custom-designed, bioactive, poly-N-acetyl-glucosamine (NAG) matrices. Wound staging was based on macroscopic, histological and immunohistochemical analysis on days 3, 7, 10 and 21 post wounding. Results: Cell density was higher in spontaneously granulating wounds compared to grafted wounds. While grafted wounds exhibited increased levels of cell proliferation on days 7 and 10, vascularity was dramatically reduced. NAG scaffolds accelerated both angiogenesis and wound re-epithelialization. Conclusions: Since slow integration and revascularization severely limit the engraftment of clinically used dermal scaffolds, the design of dermal matrices using bioactive materials represents the next step in skin regeneration.
Abstract:
BACKGROUND: There is no recommendation to screen ferritin levels in blood donors, even though several studies have noted the high prevalence of iron deficiency after blood donation, particularly among menstruating females. Furthermore, some clinical trials have shown that non-anaemic women with unexplained fatigue may benefit from iron supplementation. Our objective is to determine the clinical effect of iron supplementation on fatigue in female blood donors without anaemia, but with a mean serum ferritin ≤ 30 ng/ml. METHODS/DESIGN: In a double-blind randomised controlled trial, we will measure the blood count and ferritin level of women under 50 years of age who donate blood to the University Hospital of Lausanne Blood Transfusion Department, at the time of the donation and after 1 week. One hundred and forty donors with a ferritin level ≤ 30 ng/ml and a haemoglobin level ≥ 120 g/l (non-anaemic) a week after the donation will be included in the study and randomised. A one-month course of oral ferrous sulphate (80 mg/day of elemental iron) will be compared with placebo. Self-reported fatigue will be measured using a visual analogue scale. Secondary outcomes are: score of fatigue (Fatigue Severity Scale), maximal aerobic power (Chester Step Test), quality of life (SF-12), and mood disorders (Prime-MD). Haemoglobin and ferritin concentrations will be monitored before and after the intervention. DISCUSSION: Iron deficiency is a potential problem for all blood donors, especially menstruating women. To our knowledge, no other intervention study has yet evaluated the impact of iron supplementation on subjective symptoms after a blood donation. TRIAL REGISTRATION: NCT00689793.
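Purely as a schematic of the stated eligibility criteria (ferritin ≤ 30 ng/ml and haemoglobin ≥ 120 g/l one week after donation) and the 1:1 allocation of 140 donors, a toy screening-and-allocation sketch follows; the donor records are invented and the simple shuffle merely stands in for the trial's actual blinded randomisation procedure.

```python
# Toy sketch of the eligibility screen and a simple 1:1 allocation.
# Donor records are invented; the real trial follows its own blinded,
# pre-specified randomisation procedure.
import random

random.seed(42)

# Hypothetical post-donation laboratory values for a pool of donors.
donors = [
    {"id": i, "ferritin": random.uniform(5, 60), "hb": random.uniform(110, 150)}
    for i in range(300)
]

# Eligibility one week after donation: ferritin <= 30 ng/ml, Hb >= 120 g/l.
eligible = [d for d in donors if d["ferritin"] <= 30 and d["hb"] >= 120][:140]

# Shuffle-based 1:1 allocation (illustrative stand-in only).
random.shuffle(eligible)
half = len(eligible) // 2
arms = {"ferrous sulphate 80 mg/day": eligible[:half], "placebo": eligible[half:]}
for arm, group in arms.items():
    print(arm, "->", len(group), "donors")
```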
Abstract:
Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions underlie many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on chromatin immunoprecipitation followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome is not uniform, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool for ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors interact only with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
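A minimal sketch of two of the steps mentioned above, hot-spot filtering and unbiased random subsampling of sequence tags, is given below; the per-bin counts and the outlier threshold are invented for illustration and are not the filters proposed in the thesis.

```python
# Illustrative sketch: remove artifactual tag-accumulation hot-spots and
# subsample sequence tags without bias. Bin counts and threshold are invented.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-bin tag counts along one chromosome.
tag_counts = rng.poisson(lam=5, size=10_000)
tag_counts[rng.integers(0, 10_000, size=20)] += 500   # simulated hot-spots

# Filter: flag bins whose counts are extreme outliers (here > mean + 10 SD).
threshold = tag_counts.mean() + 10 * tag_counts.std()
kept = tag_counts[tag_counts <= threshold]
print(f"removed {np.sum(tag_counts > threshold)} hot-spot bins")

# Unbiased random subsampling: keep each tag independently with the same
# probability, so no genomic region is favoured.
total_tags = kept.sum()
sampled = rng.binomial(kept, 0.1)       # ~10% of tags per bin
print(f"kept {sampled.sum()} of {total_tags} tags after subsampling")
```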
Abstract:
Epidemiological methods have become useful tools for assessing the effectiveness and safety of health care technologies. Experimental methods, namely randomized controlled trials (RCTs), give the best evidence of the effect of a technology. However, ethical issues and the very nature of the intervention under study sometimes make it difficult to carry out an RCT; therefore, quasi-experimental and non-experimental study designs are also applied. The critical issues concerning these designs are discussed. The results of evaluative studies are important for decision-makers in health policy. The measurement of the impact of a medical technology should go beyond a statement of its effectiveness, because the essential outcome of an intervention or programme is the health status and quality of life of the individuals and populations concerned.
Abstract:
The drug discovery process has been profoundly changed in recent years by the adoption of computational methods that help design new drug candidates more rapidly and at lower cost. In silico drug design consists of a collection of tools that help make rational decisions at the different steps of the drug discovery process, such as the identification of a biomolecular target of therapeutic interest, the selection or design of new lead compounds, and their modification to obtain better affinities as well as pharmacokinetic and pharmacodynamic properties. Among the different tools available, particular emphasis is placed in this review on molecular docking, virtual high-throughput screening and fragment-based ligand design.
Abstract:
Failure to detect a species in an area where it is present is a major source of error in biological surveys. We assessed whether it is possible to optimize single-visit biological monitoring surveys of highly dynamic freshwater ecosystems by framing them a priori within a particular period of time. Alternatively, we also searched for the optimal number of visits and when they should be conducted. We developed single-species occupancy models to estimate the monthly probability of detection of pond-breeding amphibians during a four-year monitoring program. Our results revealed that detection probability was species-specific and changed among sampling visits within a breeding season and also among breeding seasons. Therefore, optimizing biological surveys with minimal survey effort (a single visit) is not feasible, as it proves impossible to select a priori an adequate sampling period that remains robust across years. Alternatively, a two-survey combination at the beginning of the sampling season yielded optimal results and constituted an acceptable compromise between sampling efficacy and survey effort. Our study provides evidence of the variability and uncertainty that likely affect the efficacy of monitoring surveys, highlighting the need for repeated sampling in both ecological studies and conservation management.
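The gain from a second visit follows directly from the cumulative detection probability over k visits, 1 - (1 - p1)(1 - p2)...(1 - pk); the short example below uses invented monthly detection probabilities, not the species-specific estimates from the occupancy models.

```python
# Back-of-the-envelope illustration of why a single visit is risky:
# cumulative detection probability over several visits. The monthly
# detection probabilities below are invented, purely for illustration.
monthly_p = {"Feb": 0.35, "Mar": 0.60, "Apr": 0.55, "May": 0.30}

def cumulative_detection(probs):
    # Probability of detecting the species at least once across the visits.
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)
    return 1.0 - miss

single = cumulative_detection([monthly_p["Mar"]])
double = cumulative_detection([monthly_p["Feb"], monthly_p["Mar"]])
print(f"one visit (Mar): {single:.2f}, two early visits (Feb+Mar): {double:.2f}")
```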
Abstract:
The use of artificial nest-boxes has led to significant progress in bird conservation and in our understanding of the functional and evolutionary ecology of free-ranging birds that exploit cavities for roosting and reproduction. Nest-boxes and their improved accessibility have made it easier to perform comparative and experimental field investigations. However, concerns about the generality and applicability of scientific studies involving birds breeding in nest-boxes have been raised because the occupants of boxes may differ from conspecifics occupying other nest sites. Here we review the existing evidence demonstrating the importance of nest-box design to individual life-history traits in three falcon (Falconiformes) and seven owl (Strigiformes) species, as well as the extent to which publications on these birds describe the characteristics of exploited artificial nest-boxes in their 'methods' sections. More than 60% of recent publications did not provide any details on nest-box design (e.g. size, shape, material), despite several calls >15 years ago to increase the reporting of such information. We exemplify and discuss how variation in nest-box characteristics can affect or confound conclusions from nest-box studies and conclude that it is of overall importance to present details of nest-box characteristics in scientific publications.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. 
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
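As a rough sketch of the gradual-deformation idea, assuming the standard formulation in which the current Gaussian realization is combined with an independent one using weights cos θ and sin θ so that the proposal preserves the prior statistics, the example below perturbs a current model vector; the model size and perturbation strength are arbitrary choices, not those used in the thesis.

```python
# Minimal gradual-deformation proposal sketch: blend the current Gaussian
# model realization with an independent one so the proposal remains a valid
# draw from the same (standard-normal) prior. Sizes and theta are arbitrary.
import numpy as np

rng = np.random.default_rng(4)
n_cells = 1_000                                 # number of model parameters

m_current = rng.standard_normal(n_cells)        # current standard-normal field
m_independent = rng.standard_normal(n_cells)    # fresh independent realization

theta = 0.15                                    # small theta = gentle perturbation
m_proposal = np.cos(theta) * m_current + np.sin(theta) * m_independent

# Since cos^2(theta) + sin^2(theta) = 1, the linear combination of two
# independent standard-normal vectors is again standard normal, so the
# proposal honours the prior before any geostatistical mapping is applied.
print("proposal std (should be close to 1):", m_proposal.std())
```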
Design of a Control Slide for Cyanoacrylate Polymerization: Application to the CA-Bluestar Sequence
Abstract:
Casework experience has shown that, in some cases, long exposure of surfaces to cyanoacrylate (CA) fuming had detrimental effects on the subsequent application of Bluestar. This study aimed to develop a control mechanism to monitor the amount of CA deposited prior to the subsequent treatment. A control slide bearing spots of sodium hydroxide (NaOH) of known concentrations and volumes was designed and validated against both scanning electron microscopy (SEM) observations and latent print examiners' assessments of the quality of the developed marks. The control slide allows one to define three levels of development, which were used to monitor the Bluestar reaction on depleting footwear marks left in diluted blood. The appropriate conditions for a successful application of both CA and Bluestar were determined.