978 results for sampling techniques
Abstract:
Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. In order to compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies at four sample sizes. The sampled data were then fed into a GLM to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's kappa agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following characteristics to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
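The four designs differ only in how sample locations are drawn from the study area. A minimal sketch of the four strategies, assuming the landscape is a 1-D array of grid cells with an integer environmental stratum label per cell (function names and this encoding are illustrative assumptions, not the paper's implementation):

    import numpy as np

    rng = np.random.default_rng(42)

    def random_sample(n_cells, n):
        """'Random': draw n cells uniformly without replacement."""
        return rng.choice(n_cells, size=n, replace=False)

    def regular_sample(n_cells, n):
        """'Regular': systematic sampling at a fixed spatial interval."""
        step = n_cells // n
        start = rng.integers(step)            # random offset for the grid
        return np.arange(start, n_cells, step)[:n]

    def equal_stratified_sample(strata, n):
        """'Equal-stratified': the same number of cells per stratum."""
        labels = np.unique(strata)
        per = n // len(labels)
        return np.concatenate([
            rng.choice(np.flatnonzero(strata == s), size=per, replace=False)
            for s in labels])

    def proportional_stratified_sample(strata, n):
        """'Proportional-stratified': cells per stratum in proportion to
        the stratum's area."""
        labels, counts = np.unique(strata, return_counts=True)
        sizes = np.maximum(1, np.round(n * counts / counts.sum()).astype(int))
        return np.concatenate([
            rng.choice(np.flatnonzero(strata == s), size=k, replace=False)
            for s, k in zip(labels, sizes)])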
Abstract:
Stimulated echoes are widely used for imaging functional tissue parameters such as the diffusion coefficient, perfusion, and flow rates. They are potentially interesting for the assessment of various cardiac functions. However, severe limitations of the stimulated echo acquisition mode occur, which are related to the special dynamic properties of the beating heart and flowing blood. In addition to the well-known signal decay due to longitudinal relaxation and through-plane motion between the preparation and the read-out period of the stimulated echoes, additional signal loss is often observed. As the prepared magnetization is fixed with respect to the tissue, this signal loss is caused by tissue deformation during the cardiac cycle, which modifies the modulation frequency of the magnetization. These effects are derived theoretically and corroborated by phantom and in vivo experiments.
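The deformation effect can be stated compactly in the standard tagging formalism (a sketch of the usual result, not necessarily the paper's own notation): because the modulation pattern is frozen into the tissue, a local deformation with gradient tensor F maps the preparation wavevector onto

    \mathbf{k}_{\mathrm{read}} = \mathbf{F}^{-\mathsf{T}}\,\mathbf{k}_{\mathrm{prep}},
    \qquad
    \mathbf{F} = \frac{\partial\mathbf{x}}{\partial\mathbf{X}},

so local stretching or compression shifts the modulation frequency, and signal is lost once the read-out wavevector drifts outside the acquired k-space window.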
Abstract:
Over the last century, numerous techniques have been developed to analyze the movement of humans while walking and running. The combined use of kinematic and kinetic methods, mainly based on high-speed video analysis and force plates, has permitted a comprehensive description of the locomotion process in terms of energetics and biomechanics. While the different phases of a single gait cycle are well understood, there is increasing interest in how the neuro-motor system controls gait from stride to stride. Indeed, it has been observed that neurodegenerative diseases and aging can affect gait stability and the steadiness of gait parameters. From both clinical and fundamental research perspectives, there is therefore a need for techniques that accurately track gait parameters stride by stride over long periods with minimal constraints on patients. In this context, high-accuracy satellite positioning can provide an alternative tool for monitoring outdoor walking. Indeed, high-end GPS receivers provide centimeter-accuracy positioning at 5-20 Hz sampling rates: this allows the stride-by-stride assessment of a number of basic gait parameters--such as walking speed, step length and step frequency--that can be tracked over several thousand consecutive strides in free-living conditions. Furthermore, long-range correlations and fractal-like patterns were observed in those time series. Compared to other classical methods, GPS seems a promising technology in the field of gait variability analysis. However, its relatively high complexity and cost--combined with a usability that requires further improvement--remain obstacles to the full development of GPS technology in human applications.
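A minimal sketch of how such stride-by-stride parameters could be derived from a GPS position trace, assuming a 2-D antenna trajectory sampled at a fixed rate (the function and the simple zero-crossing step detector are illustrative assumptions, not the authors' pipeline):

    import numpy as np

    def gait_parameters(x, y, fs=10.0):
        """Estimate walking speed, step frequencies and step lengths from
        a 2-D position trace (x, y in meters) sampled at fs Hz."""
        vx = np.gradient(x) * fs
        vy = np.gradient(y) * fs
        speed = np.hypot(vx, vy)              # instantaneous speed (m/s)
        osc = speed - speed.mean()            # speed oscillates once per step
        # upward zero-crossings of the oscillation mark step boundaries
        steps = np.flatnonzero((osc[:-1] < 0) & (osc[1:] >= 0))
        step_durations = np.diff(steps) / fs  # seconds per step
        step_freq = 1.0 / step_durations      # steps per second
        step_len = np.array([speed[a:b].mean() * (b - a) / fs
                             for a, b in zip(steps[:-1], steps[1:])])
        return speed.mean(), step_freq, step_len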
Abstract:
Mass spectrometry-based proteomics is the study of the proteome - the set of all proteins expressed in a cell, tissue or organism - using mass spectrometry. Proteins are cut into smaller pieces - peptides - using proteolytic enzymes and separated using different separation techniques. The different fractions, each containing several hundred peptides, are then analyzed by mass spectrometry. The masses of the peptides entering the instrument are recorded, and each peptide is sequentially fragmented to obtain its amino acid sequence. Each peptide sequence, with its corresponding mass, is then searched against a protein database to identify the protein to which it belongs. This thesis presents new method developments in this field. In a first part, the thesis describes the development of identification methods. It shows the importance of protein enrichment methods for gaining access to medium-to-low-abundance proteins in a human milk sample. It uses repeated injections to increase protein coverage and confidence in identification, and demonstrates the impact of new database releases on protein identification lists. In addition, it successfully uses mass spectrometry as an alternative to antibody-based assays to validate the presence of 34 different recombinant constructs of Staphylococcus aureus pathogenic proteins expressed in a Lactococcus lactis strain. In a second part, the development of quantification methods is described. It presents new stable isotope labeling approaches based on N- and C-terminus labeling of proteins and describes the first method for labeling carboxylic groups at the protein level using 13C stable isotopes. In addition, a new quantitative approach called ANIBAL is explained, which labels all amino and carboxylic groups at the protein level.
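As a toy illustration of the database-matching step described above (a sketch only; production search engines also score the fragment spectra), a peptide's monoisotopic mass can be computed from residue masses and compared to database candidates within a ppm tolerance:

    # Monoisotopic residue masses in daltons (a small subset, for
    # illustration); one water mass is added for the peptide's free
    # N- and C-termini.
    RESIDUE_MASS = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203,
                    'L': 113.08406, 'K': 128.09496, 'R': 156.10111}
    WATER = 18.01056

    def peptide_mass(seq):
        return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

    def match_peptide(observed_mass, database, tol_ppm=10.0):
        """Return all database peptides whose computed mass matches the
        observed mass within tol_ppm parts per million."""
        hits = []
        for seq in database:
            m = peptide_mass(seq)
            if abs(m - observed_mass) / m * 1e6 <= tol_ppm:
                hits.append(seq)
        return hits

    # Example: match the tryptic peptide 'GASK' against a tiny database
    print(match_peptide(peptide_mass('GASK'), ['GASK', 'LAKR']))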
Abstract:
Volumetric soil water content (theta) can be evaluated in the field by direct or indirect methods. Among the direct methods, the gravimetric method is regarded as highly reliable and thus often preferred. Its main disadvantages are that sampling and laboratory procedures are labor-intensive and that the method is destructive, which makes resampling of the same point impossible. Recently, the time domain reflectometry (TDR) technique has become a widely used indirect, non-destructive method to evaluate theta. In this study, evaluations of the apparent dielectric number of soils (epsilon) and samplings for the gravimetric determination of the volumetric soil water content (thetaGrav) were carried out at four sites of a Xanthic Ferralsol in Manaus, Brazil. With the obtained epsilon values, theta was estimated using empirical equations (thetaTDR) and compared with thetaGrav derived from disturbed and undisturbed samples. The main objective of this study was the comparison of thetaTDR estimates from horizontally as well as vertically inserted probes with the thetaGrav values determined from disturbed and undisturbed samples. Results showed that the thetaTDR estimates of vertically inserted probes and the average of horizontally measured layers differed only slightly and insignificantly. However, significant differences were found between the thetaTDR estimates of different equations and between disturbed and undisturbed samples in the thetaGrav determinations. The use of the theoretical Knight et al. model, which permits an evaluation of the soil volume assessed by TDR probes, is also discussed. It was concluded that the TDR technique, when properly calibrated, permits in situ, non-destructive measurements of theta in Xanthic Ferralsols with an accuracy similar to that of the gravimetric method.
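A widely used empirical calibration of this kind is the Topp et al. (1980) polynomial relating theta to the apparent dielectric number (whether it is among the equations tested in this study is not stated in the abstract):

    \theta = -5.3\times10^{-2} + 2.92\times10^{-2}\,\varepsilon_a
             - 5.5\times10^{-4}\,\varepsilon_a^{2} + 4.3\times10^{-6}\,\varepsilon_a^{3}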
Abstract:
The non-invasive evaluation of myocardial ischemia is a priority in cardiology. The preferred initial non-invasive test is the exercise ECG, because of its high accessibility and low cost. Stress radionuclide myocardial perfusion imaging and stress echocardiography are now routinely performed, and new non-invasive techniques such as perfusion MRI, dobutamine stress MRI and rubidium-82 perfusion PET have recently gained acceptance in clinical practice. At the same time, increasing attention has been paid to the concept of myocardial viability in decision-making in cases of ischemic heart failure. In this indication, MRI with late enhancement after intravenous injection of gadolinium and 18F-FDG PET have shown excellent diagnostic accuracy. This article presents these new imaging modalities and their accepted indications.
Abstract:
The current operational very short-term and short-term quantitative precipitation forecast (QPF) at the Meteorological Service of Catalonia (SMC) is produced by three different methodologies: advection of the radar reflectivity field (ADV); identification, tracking and forecasting of convective structures (CST); and numerical weather prediction (NWP) models using observational data assimilation (radar, satellite, etc.). These precipitation forecasts have different characteristics, lead times and spatial resolutions. The objective of this study is to combine these methods in order to obtain a single, optimized QPF at each lead time. This combination (blending) of the radar-based forecasts (ADV and CST) and the precipitation forecast from the NWP model is carried out by means of different methodologies according to the prediction horizon. Firstly, in order to take advantage of the rainfall location and intensity from radar observations, a phase-correction technique is applied to the NWP output to derive an additional corrected forecast (MCO). To select the best precipitation estimate in the first and second hour (t+1 h and t+2 h), the information from radar advection (ADV) and the corrected model output (MCO) are mixed using different weights, which vary dynamically according to indexes that quantify the quality of these predictions. This procedure integrates the skill in rainfall location and patterns given by the advection of the radar reflectivity field with the NWP models' capacity to generate new precipitation areas. From the third hour (t+3 h), as radar-based forecasting generally has low skill, only the quantitative precipitation forecast from the model is used. This blending of different sources of prediction is verified for different types of episodes (convective, moderately convective and stratiform) to obtain a robust methodology that can be implemented in an operational and dynamic way.
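A minimal sketch of the lead-time-dependent blending rule described above, assuming precipitation fields on a common grid and a dynamically updated skill weight w_adv in [0, 1] for the radar advection forecast (how the weight is computed from the quality indexes is not detailed here, so it is taken as an input):

    import numpy as np

    def blend_qpf(adv, mco, nwp, lead_h, w_adv):
        """Blend radar advection (adv), phase-corrected NWP (mco) and raw
        NWP (nwp) precipitation fields for a given lead time in hours."""
        if lead_h <= 2:
            # t+1 h and t+2 h: skill-weighted mix of radar advection and
            # the phase-corrected model output
            return w_adv * adv + (1.0 - w_adv) * mco
        # from t+3 h onward, radar-based forecasts lose skill: NWP only
        return nwp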
Abstract:
Chloride channels represent a group of targets for major clinical indications. However, molecular screening for chloride channel modulators has proven difficult and time-consuming, as existing approaches essentially rely on fluorescent dyes or invasive patch-clamp techniques, which do not lend themselves to the screening of large sets of compounds. To address this problem, we have developed a non-invasive optical method based on digital holographic microscopy (DHM) that allows monitoring of ion channel activity without using any electrode or fluorescent dye. To illustrate this approach, GABA(A)-mediated chloride currents have been monitored with DHM. In practice, we show that DHM can non-invasively provide a quantitative determination of the transmembrane chloride fluxes mediated by the activation of chloride channels associated with GABA(A) receptors. Indeed, through an original algorithm, chloride currents elicited by application of appropriate agonists of the GABA(A) receptor can be derived from the quantitative phase signal recorded with DHM. Finally, chloride currents can be determined and pharmacologically characterized non-invasively and simultaneously on a large sample of cells by DHM.
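The abstract does not disclose the algorithm itself. A heavily simplified sketch of the general idea, assuming (hypothetically) that the DHM phase signal varies linearly with net intracellular chloride accumulation, so that the current is proportional to the signal's time derivative via an experimentally determined calibration constant k_cal:

    import numpy as np

    def chloride_current(phase_deg, t_s, k_cal):
        """Hypothetical conversion of a DHM quantitative phase trace into
        a transmembrane chloride current.
        phase_deg: phase signal (degrees); t_s: time stamps (seconds);
        k_cal: assumed calibration constant (coulombs per degree)."""
        dphi_dt = np.gradient(phase_deg, t_s)  # rate of phase change (deg/s)
        return k_cal * dphi_dt                 # current (A), under the
                                               # assumed linear relation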
Abstract:
To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its genetic determination is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between Q(ST) and F(ST) for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates Q(ST) relative to F(ST). Under inbreeding, the effect of dominance vanishes, and we show that for selfing species a better estimate of Q(ST) is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effects of nonadditive gene action.
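For reference, Q(ST) is conventionally estimated for diploid, outcrossing populations from the between- and within-population additive genetic variance components (a standard definition, not specific to this paper):

    Q_{ST} = \frac{\sigma^{2}_{B}}{\sigma^{2}_{B} + 2\,\sigma^{2}_{W}}

where \sigma^{2}_{B} is the between-population component and \sigma^{2}_{W} the additive genetic variance within populations; F(ST) plays the analogous role for neutral marker loci.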
Abstract:
In arson cases, the collection and detection of traces of ignitable liquids on a suspect's hands can provide information to a forensic investigation. Police forces currently lack a simple, robust, efficient and reliable solution for performing this type of swabbing. In this article, we describe a study undertaken to develop a procedure for the collection of ignitable liquid residues on the hands of arson suspects. Sixteen different collection supports were considered, and their applicability to the collection of gasoline traces present on hands and their subsequent analysis in a laboratory was evaluated. Background contamination, consisting of volatiles emanating from the collection supports, and the collection efficiencies of the different sampling materials were assessed by passive headspace extraction with an activated charcoal strip (DFLEX device) followed by gas chromatography-mass spectrometry (GC-MS) analysis. After statistical treatment of the results, non-powdered latex gloves were retained as the most suitable sampling material. On the basis of the obtained results, a prototype sampling kit was designed and tested. This kit consists of a three-compartment multilayer bag enclosed in a sealed metal can and containing three pairs of non-powdered latex gloves: one to be worn by the sampler, one serving as a blank sample, and one to be worn by the person suspected of having been in contact with ignitable liquids. The kit was designed to be efficient in preventing external contamination and cross-contamination.
Abstract:
The new techniques proposed for agriculture in the Amazon region include rotational fallow systems enriched with leguminous trees and the replacement of biomass burning by mulching. Decomposition and nutrient release from mulch were studied using fine-mesh litterbags with five different leguminous species and the natural fallow vegetation as control. Samples from each treatment were analyzed for total C, N, P, K, Ca, Mg, lignin, cellulose and soluble polyphenol content at different sampling times over the course of one year. The decomposition rate constant varied with species and time. Weight loss from the decomposing litterbag material after 96 days was 30.1% for Acacia angustissima, 32.7% for Sclerolobium paniculatum, 33.9% for Inga edulis and the fallow vegetation, 45.2% for Acacia mangium and 63.6% for Clitoria racemosa. Immobilization of N and P was observed in all studied treatments. Nitrogen mineralization was negatively correlated with phenol content, the C/N ratio, the (lignin + phenol)/N ratio and the phenol/P ratio, and with the N content of the litterbag material. After 362 days of field incubation, on average (across all treatments) 3.3% of the K, 32.2% of the Ca and 22.4% of the Mg remained in the mulch. The results confirm that applying large amounts of low-quality organic C as mulch limits the energy available to microorganisms and increases nutrient immobilization during biomass decomposition, which results in competition for nutrients with the crop plants.
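The decomposition rate constant in litterbag studies is commonly obtained by fitting the single-pool exponential decay model (a standard assumption; the abstract does not state which model was fitted):

    X_t = X_0\, e^{-kt} \quad\Longrightarrow\quad k = -\frac{1}{t}\,\ln\frac{X_t}{X_0}

where X_0 is the initial dry weight, X_t the weight remaining at time t, and k the decomposition rate constant. For example, the 30.1% weight loss after 96 days reported for Acacia angustissima corresponds to k = -ln(0.699)/96, approximately 0.0037 per day.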