14 results for "1 sigma error"

at Université de Lausanne, Switzerland


Relevance:

80.00%

Abstract:

Major and trace element compositions, stable H and O isotope compositions and Fe3+ contents of amphibole megacrysts from Pliocene-Pleistocene alkaline basalts have been investigated to obtain information on the origin of mantle fluids beneath the Carpathian-Pannonian region. The megacrysts have been regarded as igneous cumulates formed in the mantle and brought to the surface by the basaltic magma. The studied amphiboles have oxygen isotope compositions (delta(18)O = 5.4 +/- 0.2 parts per thousand, 1 sigma) supporting their primary mantle origin. Even within the small delta(18)O variation observed, correlations with major and trace elements are detected. The negative delta(18)O-MgO and the positive delta(18)O-La/Sm(N) correlations are interpreted to have resulted from varying degrees of partial melting. The halogen (F, Cl) contents are very low (< 0.1 wt.%); however, a firm negative (F+Cl)-MgO correlation (R(2) = 0.84) can be related to the Mg-Cl avoidance in the amphibole structure. The relationships between water contents, H isotope compositions and Fe3+ contents of the amphibole megacrysts revealed degassing. Selected undegassed amphibole megacrysts show a wide delta D range from -80 to -20 parts per thousand. The low delta D values are characteristic of normal mantle, whereas the high delta D values may indicate the influence of fluids released from subducted oceanic crust. The chemical and isotopic evidence collectively suggests that formation of the amphibole megacrysts is related to fluid metasomatism, whereas direct melt addition is insignificant.

Relevance:

80.00%

Abstract:

The Swain correction adjusts the chi-square overidentification test (i.e., the likelihood ratio test of fit) for structural equation models, whether with or without latent variables. The chi-square statistic is asymptotically correct; however, it does not behave as expected in small samples and/or when the model is complex (cf. Herzog, Boomsma, & Reinecke, 2007). Thus, particularly in situations where the ratio of sample size (n) to the number of parameters estimated (p) is relatively small (i.e., the p to n ratio is large), the chi-square test will tend to overreject correctly specified models. To obtain a closer approximation to the distribution of the chi-square statistic, Swain (1975) developed a correction; this scaling factor, which converges to 1 asymptotically, is multiplied by the chi-square statistic. The correction better approximates the chi-square distribution, resulting in more appropriate Type I error rates (see Herzog & Boomsma, 2009; Herzog et al., 2007).
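A minimal sketch of how such a multiplicative correction is applied, assuming the Swain factor formula as reported in Herzog & Boomsma (2009); the example inputs are invented, and whether n or n - 1 enters the denominator depends on the estimator, so check the original sources before use:

```python
from math import sqrt
from scipy.stats import chi2 as chi2_dist

def swain_corrected_chi2(chi2_stat, n, p, t):
    # d: degrees of freedom of the covariance-structure test
    # (p observed variables, t freely estimated parameters)
    d = p * (p + 1) // 2 - t
    # Swain's scaling factor, as given in Herzog & Boomsma (2009); it
    # converges to 1 as n grows, so the correction vanishes asymptotically.
    q = (sqrt(1 + 8 * t) - 1) / 2
    s = 1 - (p * (2 * p**2 + 3 * p - 1)
             - q * (2 * q**2 + 3 * q - 1)) / (12 * d * n)
    corrected = s * chi2_stat
    return corrected, chi2_dist.sf(corrected, d)  # corrected stat and p-value

# Hypothetical example: 10 observed variables, 23 free parameters, n = 100
print(swain_corrected_chi2(48.3, n=100, p=10, t=23))
```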

Relevance:

80.00%

Abstract:

Whole-body counting is a technique of choice for assessing the intake of gamma-emitting radionuclides. An appropriate calibration is necessary, which is done either by experimental measurement or by Monte Carlo (MC) calculation. The aim of this work was to validate a MC model for calibrating whole-body counters (WBCs) by comparing the results of computations with measurements performed on an anthropomorphic phantom, and to investigate the effect of a change in the phantom's position on the WBC counting sensitivity. The GEANT MC code was used for the calculations, and an IGOR phantom loaded with several types of radionuclides was used for the experimental measurements. The results show a reasonable agreement between measurements and MC computation. A 1-cm error in phantom positioning changes the activity estimate by >2%. Considering that a 5-cm deviation in the positioning of the phantom may occur in a realistic counting scenario, this implies that the uncertainty of the activity measured by a WBC is ~10-20%.
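The quoted figures combine by simple linear scaling, as this toy calculation illustrates (the ~2%-per-cm sensitivity comes from the abstract; everything else is hypothetical):

```python
# If a 1-cm positioning error biases the activity estimate by about 2%,
# a realistic 5-cm mis-positioning translates into roughly 10% uncertainty
# (up to ~20% along directions where the sensitivity is steeper).
sensitivity_per_cm = 0.02    # fractional change in estimated activity per cm
positioning_error_cm = 5.0   # plausible deviation in a realistic scenario
print(f"~{sensitivity_per_cm * positioning_error_cm:.0%} activity uncertainty")
```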

Relevance:

30.00%

Abstract:

Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
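A minimal sketch of the replicate-based error estimate, assuming genotypes coded as alternate-allele counts (0/1/2, with -1 for a missing call); the coding scheme and the example data are our assumptions, not the paper's pipeline:

```python
import numpy as np

def replicate_error_rate(geno_a, geno_b, missing=-1):
    # Replicates of the same individual should genotype identically, so any
    # disagreement at a locus called in both copies counts as an error.
    a, b = np.asarray(geno_a), np.asarray(geno_b)
    called = (a != missing) & (b != missing)  # ignore loci missing in either copy
    return np.mean(a[called] != b[called])

# Hypothetical replicate pair over 12 loci
rep1 = [0, 1, 2, 1, 0, -1, 2, 1, 0, 1, 2, 0]
rep2 = [0, 1, 2, 0, 0, 1, 2, 1, 0, 1, -1, 0]
print(f"SNP error rate: {replicate_error_rate(rep1, rep2):.2f}")  # 1/10 = 0.10
```

The same mismatch count, aggregated per parameter setting, can serve as the objective when tuning de novo assembly parameters: keep the setting that minimizes replicate disagreement while maximizing retained loci.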

Relevance:

30.00%

Abstract:

Cerebral metabolism is compartmentalized between neurons and glia. Although glial glycolysis is thought to largely sustain the energetic requirements of neurotransmission while oxidative metabolism takes place mainly in neurons, this hypothesis is a matter of debate. The compartmentalization of cerebral metabolic fluxes can be determined by (13)C nuclear magnetic resonance (NMR) spectroscopy upon infusion of (13)C-enriched compounds, especially glucose. Rats under light α-chloralose anesthesia were infused with [1,6-(13)C]glucose, and (13)C enrichment in brain metabolites was measured by (13)C NMR spectroscopy with high sensitivity and spectral resolution at 14.1 T. This allowed determination of the (13)C enrichment curves of amino acid carbons with high reproducibility and reliable estimation of cerebral metabolic fluxes (mean error of 8%). We further found that TCA cycle intermediates are not required for flux determination in mathematical models of brain metabolism. The neuronal tricarboxylic acid cycle rate (V(TCA)) and neurotransmission rate (V(NT)) were 0.45 +/- 0.01 and 0.11 +/- 0.01 μmol/g/min, respectively. Glial V(TCA) was found to be 38 +/- 3% of total cerebral oxidative metabolism, accounting for more than half of neuronal oxidative metabolism. Furthermore, the glial anaplerotic pyruvate carboxylation rate (V(PC)) was 0.069 +/- 0.004 μmol/g/min, i.e., 25 +/- 1% of the glial TCA cycle rate. These results support a role of glial cells as active partners of neurons during synaptic transmission beyond glycolytic metabolism.
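As a minimal sketch of the curve-fitting step: the actual analysis fits a multi-compartment model of neuronal and glial metabolism to the enrichment time courses, but a single mono-exponential turnover curve illustrates the principle (all data points below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def enrichment(t, e_max, k):
    # Fractional 13C enrichment of a metabolite carbon approaching a plateau
    return e_max * (1.0 - np.exp(-k * t))

t_min = np.array([0, 10, 20, 40, 60, 90, 120.0])         # min after infusion start
enr = np.array([0, 0.12, 0.21, 0.32, 0.38, 0.42, 0.44])  # invented glutamate-C4 data

(e_max, k), cov = curve_fit(enrichment, t_min, enr, p0=(0.5, 0.02))
print(f"plateau = {e_max:.2f}, turnover rate = {k:.3f}/min "
      f"(+/- {np.sqrt(cov[1, 1]):.3f})")
```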

Relevance:

30.00%

Abstract:

In Pseudomonas fluorescens biocontrol strain CHA0, the two-component system GacS/GacA positively controls the synthesis of extracellular products such as hydrogen cyanide, protease, and 2,4-diacetylphloroglucinol, by upregulating the transcription of small regulatory RNAs which relieve RsmA-mediated translational repression of target genes. The expression of the stress sigma factor sigmaS (RpoS) was controlled positively by GacA and negatively by RsmA. By comparison with the wild-type CHA0, both a gacS and an rpoS null mutant were more sensitive to H2O2 in stationary phase. Overexpression of rpoS or of rsmZ, encoding a small RNA antagonistic to RsmA, restored peroxide resistance to a gacS mutant. By contrast, the rpoS mutant showed a slight increase in the expression of the hcnA (HCN synthase subunit) gene and of the aprA (major exoprotease) gene, whereas overexpression of sigmaS strongly reduced the expression of these genes. These results suggest that in strain CHA0, regulation of exoproduct synthesis does not involve sigmaS as an intermediate in the Gac/Rsm signal transduction pathway whereas sigmaS participates in Gac/Rsm-mediated resistance to oxidative stress.

Relevance:

30.00%

Abstract:

In this paper, the iterative multiscale finite-volume (MSFV) method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in the simulation results, an error estimate based on the residual of the MSFV approximate pressure field is introduced. In the initial time steps of the simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and the error-control criteria are numerically investigated. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
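A generic sketch of the residual-based error control described above, with a simple Jacobi update standing in for an MSFV smoothing cycle (the operators, system and tolerances are placeholders, not the paper's):

```python
import numpy as np

def pressure_with_error_control(A, b, smoother, tol, max_iters):
    # Start from a cheap approximate pressure field (here simply zero) and
    # apply extra improvement cycles only while the relative residual of the
    # pressure equation exceeds the user-specified threshold.
    p = np.zeros_like(b)
    err = np.inf
    for it in range(max_iters):
        r = b - A @ p                             # pressure-equation residual
        err = np.linalg.norm(r) / np.linalg.norm(b)
        if err <= tol:                            # accuracy reached: stop early
            break
        p = smoother(p, r)                        # one more iteration cycle
    return p, err, it

# Toy usage on a diagonally dominant SPD system
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 500 * np.eye(50)
b = rng.standard_normal(50)
jacobi = lambda p, r: p + r / np.diag(A)
p, err, its = pressure_with_error_control(A, b, jacobi, tol=1e-6, max_iters=500)
print(f"relative residual {err:.1e} after {its} iterations")
```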

Relevance:

30.00%

Abstract:

Knowledge of T(1) relaxation times can be important for accurate relative and absolute quantification of brain metabolites, for sensitivity optimizations, for characterizing molecular dynamics, and for studying changes induced by various pathological conditions. (1)H T(1) relaxation times of a series of brain metabolites, including J-coupled ones, were determined using a progressive saturation (PS) technique that was validated with an adiabatic inversion-recovery (IR) method. The (1)H T(1) relaxation times of 16 functional groups of the neurochemical profile were measured at 14.1T and 9.4T. Overall, the T(1) relaxation times found at 14.1T were, within the experimental error, identical to those at 9.4T. The T(1)s of some coupled spin resonances of the neurochemical profile were measured for the first time (e.g., those of gamma-aminobutyrate [GABA], aspartate [Asp], alanine [Ala], phosphoethanolamine [PE], glutathione [GSH], N-acetylaspartylglutamate [NAAG], and glutamine [Gln]). Our results suggest that T(1) does not increase substantially beyond 9.4T. Furthermore, the similarity of T(1) among the metabolites (approximately 1.5 s) suggests that T(1) relaxation time corrections for metabolite quantification are likely to be similar when using rapid pulsing conditions. We therefore conclude that the putative T(1) increase of metabolites has a minimal impact on sensitivity when increasing B(0) beyond 9.4T.
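A minimal sketch of the progressive-saturation fit, assuming the standard saturation-recovery law S(TR) = S0 * (1 - exp(-TR/T1)); the repetition times and signals below are synthetic, chosen to be consistent with T1 ≈ 1.5 s:

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_recovery(tr, s0, t1):
    # Steady-state signal in a progressive-saturation experiment
    return s0 * (1.0 - np.exp(-tr / t1))

tr = np.array([0.5, 1.0, 1.5, 2.5, 4.0, 6.0, 10.0])         # repetition times (s)
sig = np.array([0.29, 0.49, 0.63, 0.81, 0.93, 0.98, 1.00])  # signal (a.u.)

(s0, t1), cov = curve_fit(sat_recovery, tr, sig, p0=(1.0, 1.0))
print(f"T1 = {t1:.2f} +/- {np.sqrt(cov[1, 1]):.2f} s")
```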

Relevance:

30.00%

Abstract:

A method is proposed for the estimation of the absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo, and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SASbur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, deltaGsolv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method were developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained, with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structure, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
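A minimal sketch of calibrating such a linear scoring model by least squares, with invented placeholder descriptors and affinities (the paper's actual training data and fitted coefficients are not reproduced here):

```python
import numpy as np

# Columns: SASbur (A^2), Eelec (kcal/mol), deltaGsolv (kcal/mol); one row per
# training complex. All values are hypothetical placeholders.
X = np.array([
    [820.0, -45.2, 31.0],
    [910.0, -52.8, 36.5],
    [760.0, -38.9, 27.2],
    [880.0, -49.1, 33.8],
])
dG_exp = np.array([-11.2, -12.6, -9.8, -12.0])     # experimental dG (kcal/mol)

coef, *_ = np.linalg.lstsq(X, dG_exp, rcond=None)  # fitted weights of the terms
dG_pred = X @ coef
print("coefficients:", coef)
print(f"mean unsigned error: {np.abs(dG_pred - dG_exp).mean():.2f} kcal/mol")
```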

Relevance:

30.00%

Abstract:

Real-time glycemia is a cornerstone of metabolic research, particularly when performing oral glucose tolerance tests (OGTT) or glucose clamps. From 1965 to 2009, the gold-standard device for real-time plasma glucose assessment was the Beckman glucose analyzer 2 (Beckman Instruments, Fullerton, CA), whose technology couples a glucose oxidase enzymatic assay with oxygen sensors. Since its discontinuation in 2009, today's researchers are left with few choices that utilize glucose oxidase technology. The first one is the YSI 2300 (Yellow Springs Instruments Corp., Yellow Springs, OH), known to be as accurate as the Beckman(1). The YSI has been used extensively in clinical research studies and is used to validate other glucose monitoring devices(2). The major drawback of the YSI is that it is relatively slow and requires high maintenance. The Analox GM9 (Analox Instruments, London), more recent and faster, is increasingly used in clinical research(3) as well as in basic science(4) (e.g., 23 papers in Diabetes and 21 in Diabetologia).

Relevance:

30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Numerous problems arise, ranging from the prospection of new resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches are therefore needed to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of such approaches is the computational cost of simulating complex flow processes in each realization.

In the first part of the thesis, this issue is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representative of the subsurface properties. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water) while limiting the computational cost, state-of-the-art methods make use of approximate flow models to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all the available information, not solely the subset of exact responses. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known; following a machine-learning approach, this information is used to construct an error model, which then corrects the remaining approximate responses and predicts the expected responses of the exact model. The proposed methodology exploits all the available information without perceptible additional computational cost, and makes the uncertainty propagation more accurate and more robust.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid; the error model allows a strong reduction of the computational cost while correctly estimating the uncertainty, and the individual correction of each approximate response provides an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is thus relevant not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to generate geostatistical realizations in accordance with the observations. However, these methods suffer from very low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation in a two-stage MCMC, and we demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation.

An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, so that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a problem of saline intrusion in a coastal aquifer.
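A minimal sketch of the functional error model idea, with ordinary PCA on discretized curves standing in for FPCA and synthetic Gaussian-shaped "responses" in place of flow-simulation output (the names, dimensions and the linear score-space regression are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# For a training subset of realizations both the proxy (cheap solver) and the
# exact responses are known; we regress the exact PCA scores on the proxy
# scores and use the fitted map to correct the remaining proxy curves.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)  # discretized response curves (e.g. concentration)
exact = np.array([np.exp(-(t - c)**2 / 0.02) for c in rng.uniform(.3, .7, 60)])
proxy = 0.8 * exact + 0.05 * rng.standard_normal(exact.shape)  # biased, noisy

train = slice(0, 20)                       # subset run with both models
pca_p, pca_e = PCA(n_components=5), PCA(n_components=5)
scores_p = pca_p.fit_transform(proxy[train])
scores_e = pca_e.fit_transform(exact[train])

reg = LinearRegression().fit(scores_p, scores_e)  # error model in score space

# Correct the held-out proxy responses and check against the exact ones
corrected = pca_e.inverse_transform(reg.predict(pca_p.transform(proxy[20:])))
rmse = np.sqrt(np.mean((corrected - exact[20:])**2))
print(f"RMSE of corrected curves vs exact: {rmse:.4f}")
```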