967 results for Bayesian hypothesis testing


Relevance:

20.00%

Publisher:

Abstract:

Twelve primers to amplify microsatellite markers from the chloroplast genome of Lolium perenne were designed and optimized using de novo sequencing and in silico sequences. With one exception, each locus was polymorphic with a range from two to nine alleles in L. perenne. The newly developed primer pairs cross-amplified in different species of Lolium and in 50 other grass species representing nine grass subfamilies.


We derive necessary and sufficient conditions under which a set of variables is informationally sufficient, i.e. it contains enough information to estimate the structural shocks with a VAR model. Based on such conditions, we suggest a procedure to test for informational sufficiency. Moreover, we show how to amend the VAR if informational sufficiency is rejected. We apply our procedure to a VAR including TFP, unemployment and per-capita hours worked. We find that the three variables are not informationally sufficient. When adding missing information, the effects of technology shocks change dramatically.
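The testing idea can be illustrated with a single-equation lag-augmentation check: if an omitted series f still helps predict a VAR variable y given y's own lags, the variable set cannot be informationally sufficient. This is a hedged sketch of the logic, not the authors' exact multivariate procedure; the simulated series and lag length are illustrative assumptions.

```python
import numpy as np

# Sketch: does adding lags of an external series f reduce the residual sum
# of squares for y beyond y's own lags? A large F-statistic suggests the
# original variable set misses information carried by f.
rng = np.random.default_rng(0)
T, p = 300, 2
f = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * f[t - 1] + 0.1 * rng.standard_normal()

def rss(X, target):
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return float(((target - X @ beta) ** 2).sum())

Y = y[p:]
lags_y = np.column_stack([y[p - k:T - k] for k in (1, 2)])
lags_f = np.column_stack([f[p - k:T - k] for k in (1, 2)])
ones = np.ones((T - p, 1))
rss_r = rss(np.hstack([ones, lags_y]), Y)           # restricted: own lags only
rss_u = rss(np.hstack([ones, lags_y, lags_f]), Y)   # unrestricted: adds lags of f
n, k_u = T - p, 5
F = ((rss_r - rss_u) / 2) / (rss_u / (n - k_u))
print(F > 3.0)  # large F -> informational sufficiency rejected
```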


Since the introduction of blood analysis prior to major sports events, there has been a long debate over whether blood samples should be analysed immediately at the site of competition or transported and analysed in an anti-doping laboratory. It was therefore necessary to measure blood samples and compare the results obtained right after blood withdrawal with those obtained after a few hours' delay. Furthermore, it was of interest to determine the effect of temperature on the possible deterioration of the red blood cell analytes used for testing recombinant erythropoietin abuse. Healthy volunteers were asked to give two blood samples; one was kept at room temperature while the second was put into a refrigerator. On a regular basis, the samples were rolled for homogenisation and temperature stabilisation and were analysed with the same haematological apparatus. The results confirmed that blood controls prior to competition should be performed as soon as possible under standardised pre-analytical conditions to avoid excessive variation, notably in the haematocrit and the reticulocyte count. These recommendations should ideally also be applied to all the blood controls compulsory for medical follow-up; otherwise, unexplainable values could be misinterpreted and could, for instance, lead to a period of incapacity.


Half-lives of radionuclides span more than 50 orders of magnitude. We characterize the probability distribution of this broad-range data set while exploring a method for fitting power laws and testing goodness of fit. We find that the procedure recently proposed by Clauset et al. [SIAM Rev. 51, 661 (2009)] does not perform well, as it rejects the power-law hypothesis even for synthetic power-law data. In contrast, we establish the existence of a power-law exponent with a value around 1.1 for the half-life density, which can be explained by the sharp relationship between decay rate and released energy for the different disintegration types. For the case of alpha emission, this relationship constitutes an original mechanism of power-law generation.
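The core fitting machinery discussed above, the continuous maximum-likelihood estimator of Clauset et al. together with a Kolmogorov-Smirnov distance, can be sketched on synthetic power-law data using the abstract's exponent of 1.1; the sample size, xmin, and seed are arbitrary choices:

```python
import numpy as np

# Continuous power-law MLE (Clauset, Shalizi & Newman 2009):
# alpha_hat = 1 + n / sum(ln(x_i / xmin)) for x_i >= xmin,
# with goodness of fit gauged by the Kolmogorov-Smirnov distance.
rng = np.random.default_rng(1)
alpha_true, xmin = 1.1, 1.0
u = rng.random(10_000)
x = xmin * (1 - u) ** (-1 / (alpha_true - 1))  # inverse-CDF sampling

n = x.size
alpha_hat = 1 + n / np.log(x / xmin).sum()

# Distance between the empirical CDF and the fitted power-law CDF
xs = np.sort(x)
ecdf = np.arange(1, n + 1) / n
model_cdf = 1 - (xs / xmin) ** (1 - alpha_hat)
ks = np.abs(ecdf - model_cdf).max()
print(alpha_hat, ks)  # exponent close to 1.1, small KS distance
```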


By definition, alcohol expectancies (after alcohol I expect X) and drinking motives (I drink to achieve X) are conceptually distinct constructs. Theorists have argued that motives mediate the association between expectancies and drinking outcomes. Yet, given the use of different instruments, do these constructs remain distinct when assessment items are matched? The present study tested to what extent motives mediated the link between expectancies and alcohol outcomes when identical items were used, first as expectancies and then as motives. A linear structural equation model was estimated based on a nationally representative sample of 5,779 alcohol-using students in Switzerland (mean age = 15.2 years). The results showed that expectancies explained up to 38% of the variance in motives. Together with motives, they explained up to 48% of the variance in alcohol outcomes (volume, 5+ drinking, and problems). In 10 of 12 outcomes, there was a significant mediated effect that was often higher than the direct expectancy effect. For coping, the expectancy effect was close to zero, indicating the strongest form of mediation. In only one case (conformity and 5+ drinking) was there a direct expectancy effect but no mediation. To conclude, the study demonstrates that motives are distinct from expectancies even when identical items are used. Motives are more proximally related to different alcohol outcomes, often mediating the effects of expectancies. Consequently, the effectiveness of interventions, particularly those aimed at coping drinkers, should be improved by a shift in focus from expectancies to drinking motives.
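The mediation logic (expectancy → motive → outcome) can be illustrated with the classic product-of-coefficients decomposition; the simulated effect sizes and variable names below are arbitrary stand-ins, not the study's estimates:

```python
import numpy as np

# Product-of-coefficients mediation sketch: expectancy E -> motive M -> outcome Y.
# Effect sizes are illustrative assumptions, not the study's SEM results.
rng = np.random.default_rng(2)
n = 5_000
E = rng.standard_normal(n)
M = 0.6 * E + rng.standard_normal(n)             # a path
Y = 0.5 * M + 0.1 * E + rng.standard_normal(n)   # b path plus direct effect c'

def ols(columns, target):
    X = np.column_stack([np.ones(len(target))] + columns)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta

a = ols([E], M)[1]
_, b, c_prime = ols([M, E], Y)
indirect = a * b  # mediated (indirect) effect, roughly 0.6 * 0.5 = 0.30
print(indirect > c_prime)  # mediated effect exceeds the direct effect
```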


The objective of the project is to develop a working tool for a Quality Assurance department. Through it, it must be possible to run automated tests against certain functionalities of the Logic Class application: payroll and social insurance calculation (Cálculo de Nómina y Seguros Sociales).
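In the same spirit, an automated regression test pins a payroll computation to known expected figures; the `net_salary` function and its rates below are hypothetical stand-ins for illustration, not Logic Class internals:

```python
import unittest

# Hypothetical payroll function: rates and rounding are illustrative
# assumptions, standing in for the application's actual calculation.
def net_salary(gross, social_security_rate=0.0635, tax_rate=0.15):
    withheld = gross * (social_security_rate + tax_rate)
    return round(gross - withheld, 2)

class PayrollTests(unittest.TestCase):
    # Regression test: a known input must keep producing the known output.
    def test_net_salary(self):
        self.assertEqual(net_salary(2000.0), 1573.0)

unittest.main(exit=False, argv=["payroll"])
```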


PURPOSE: We report on the in vivo testing of a novel noninvasively adjustable glaucoma drainage device (AGDD), which features an adjustable outflow resistance, and assess the safety and efficacy of this implant. METHODS: Under general anesthesia, the AGDD was implanted in seven New Zealand White rabbits for a duration of 4 months under a scleral flap, in a way analogous to the Ex-PRESS device, and set in an operationally closed position. The IOP was measured on a regular basis in the operated and control eyes using a rebound tonometer. Once a month, the AGDD was adjusted noninvasively from its fully closed to its fully open position and the resulting pressure drop was measured. The contralateral eye was not operated on and served as control. After euthanasia, the eyes were collected for histological evaluation. RESULTS: The mean preoperative IOP was 11.1 ± 2.4 mm Hg. The IOP was significantly lower in the operated eye (6.8 ± 2 mm Hg) than in the nonoperated eye (13.1 ± 1.6 mm Hg) during the first 8 days after surgery. When opening the AGDD from its fully closed to its fully open position, the IOP dropped significantly from 11.2 ± 2.9 to 4.8 ± 0.9 mm Hg (P < 0.05). CONCLUSIONS: Implanting the AGDD is a safe and uncomplicated surgical procedure. With the AGDD, the fluidic resistance was noninvasively adjustable during the postoperative period between its fully closed and fully open positions.


The reliance in experimental psychology on testing undergraduate populations with relatively little life experience, and/or ambiguously valenced stimuli with varying degrees of self-relevance, may have contributed to inconsistent findings in the literature on the valence hypothesis. To control for these potential limitations, the current study assessed lateralised lexical decisions for positive and negative attachment words in 40 middle-aged male and female participants. Self-relevance was manipulated in two ways: by testing currently married compared with previously married individuals, and by assessing self-relevance ratings individually for each word. Results replicated a left-hemisphere advantage for lexical decisions and a processing advantage of emotional over neutral words, but did not support the valence hypothesis. Positive attachment words yielded a processing advantage over neutral words in the right hemisphere, while emotional words (irrespective of valence) yielded a processing advantage over neutral words in the left hemisphere. Both self-relevance manipulations were unrelated to lateralised performance. The roles of participant sex and age in emotion processing are discussed as potential modulators of the present findings.


The hydrogen isotope ratio (HIR) of body water and, therefore, of all endogenously synthesized compounds in humans is mainly affected by the HIR of ingested drinking water. As a consequence, the entire organism and all of its synthesized substrates will reflect alterations in the isotope ratio of drinking water, depending on the duration of exposure. To investigate the effect of this change on endogenous urinary steroids relevant to doping-control analysis, the hydrogen isotope composition of potable water was suddenly enriched from -50‰ to 200‰ and maintained at this level for two weeks for two individuals. The steroids under investigation were 5β-pregnane-3α,20α-diol, 5α-androst-16-en-3α-ol, 3α-hydroxy-5α-androstan-17-one (ANDRO), 3α-hydroxy-5β-androstan-17-one (ETIO), 5α-androstane-3α,17β-diol, and 5β-androstane-3α,17β-diol (excreted as glucuronides), and ETIO, ANDRO, and 3β-hydroxyandrost-5-en-17-one (excreted as sulfates). The HIR of body water was estimated by determining the HIR of total native urine, to trace the induced changes. The hydrogen in steroids is partly derived from body water, and cholesterol enrichment could be calculated from these data. Although the sum of changes in the isotopic composition of body water was 150‰, shifts of approximately 30‰ were observed for urinary steroids. Parallel enrichment in HIR was observed for most of the steroids, and none of the differences between the HIRs of individual steroids was elevated beyond recently established thresholds. This finding is important to sports drug testing because it supports the intended use of this novel and complementary methodology even in cases where athletes have drunk water of different HIR, a plausible and, presumably, inevitable scenario while traveling.
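The size of the observed steroid shift relative to the body-water shift gives a rough two-pool mixing estimate of how much steroid hydrogen tracks body water; the calculation below simply restates the abstract's numbers under that simplifying assumption:

```python
# Two-pool mixing sketch: if a fraction f of a steroid's hydrogen derives
# from body water, a shift of the water's delta-2H by d_water per mil moves
# the steroid's delta-2H by roughly f * d_water. Values from the abstract.
d_water = 150.0    # observed change in body-water delta-2H (per mil)
d_steroid = 30.0   # observed change in urinary-steroid delta-2H (per mil)
f = d_steroid / d_water
print(f)  # ~0.2: about one fifth of the steroid hydrogen tracks body water
```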


From March 1996 to August 1997, a study was carried out in a malaria-endemic area of the Brazilian Amazon region. In vivo sensitivity evaluation to antimalarial drugs was performed in 129 patients. Blood samples (0.5 ml) were drawn from each patient and cryopreserved for in vitro studies. In vitro sensitivity evaluation using a radioisotope method was carried out with the cryopreserved samples from September to December 1997. Thirty-one samples were tested for chloroquine, mefloquine, halofantrine, quinine, arteether and atovaquone. Resistance was evidenced in 96.6% (29/30) of the samples tested for chloroquine, 3.3% (1/30) for quinine, none (0/30) for mefloquine and none (0/30) for halofantrine. Overall low sensitivity was evidenced in 10% of the samples tested for quinine, 22.5% tested for halofantrine and 20% tested for mefloquine. Mean IC50 values were 132.2 (SD: 46.5) ng/ml for chloroquine, 130.6 (SD: 49.6) ng/ml for quinine, 3.4 (SD: 1.3) ng/ml for mefloquine, 0.7 (SD: 0.3) ng/ml for halofantrine, 1 (SD: 0.6) ng/ml for arteether and 0.4 (SD: 0.2) ng/ml for atovaquone. The mean chloroquine IC50 of the tested samples was comparable to that of the chloroquine-resistant strain W2 (137.57 ng/ml) and nearly nine times higher than that of the chloroquine-sensitive strain D6 (15.09 ng/ml). The mean quinine IC50 of the tested samples was 1.7 times higher than that of the low-sensitivity strain W2 (74.84 ng/ml) and nearly five times higher than that of the quinine-sensitive strain D6 (27.53 ng/ml). These results disclose high in vitro resistance levels to chloroquine, low sensitivity to quinine, and evidence of decreasing sensitivity to mefloquine and halofantrine in the area under evaluation.
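The resistance ratios quoted above follow directly from the reported means; a quick check of the arithmetic:

```python
# Resistance-ratio arithmetic from the abstract: mean chloroquine IC50 of the
# field samples versus the sensitive (D6) and resistant (W2) reference strains.
mean_cq = 132.2               # field-sample mean IC50, ng/ml
d6_cq, w2_cq = 15.09, 137.57  # reference-strain IC50s, ng/ml
ratio_sensitive = mean_cq / d6_cq   # "nearly nine times higher" than D6
ratio_resistant = mean_cq / w2_cq   # comparable to W2
print(round(ratio_sensitive, 1), round(ratio_resistant, 2))
```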


Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
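The gradual-deformation proposal described above can be sketched as a Metropolis scheme in which the current model is mixed with a fresh prior realization via cosine/sine weights, which leaves a Gaussian prior invariant; the toy Gaussian likelihood, dimension, and step size below are illustrative assumptions, not the thesis's tomographic setup:

```python
import numpy as np

# Gradual-deformation proposal inside Metropolis-Hastings. Mixing the current
# model with a fresh N(0, I) draw preserves the Gaussian prior, so the
# acceptance ratio involves only the likelihood; theta sets the perturbation
# strength. The Gaussian "data" are an illustrative assumption.
rng = np.random.default_rng(3)
dim, theta, n_iter = 50, 0.3, 2_000
d_obs = np.full(dim, 1.0)

def log_like(m):
    return -0.5 * float(np.sum((m - d_obs) ** 2))  # toy Gaussian likelihood

m = rng.standard_normal(dim)   # start from a prior draw
accepted = 0
for _ in range(n_iter):
    z = rng.standard_normal(dim)
    m_prop = m * np.cos(theta) + z * np.sin(theta)  # still N(0, I) a priori
    if np.log(rng.random()) < log_like(m_prop) - log_like(m):
        m, accepted = m_prop, accepted + 1
print(accepted / n_iter, m.mean())  # posterior mean per component is 0.5
```

A smaller theta raises the acceptance rate but lengthens the correlation time of the chain, which is exactly the trade-off the thesis tunes via the perturbation strength.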


Sex allocation data in eusocial Hymenoptera (ants, bees and wasps) provide an excellent opportunity to assess the effectiveness of kin selection, because queens and workers differ in their relatedness to females and males. The first studies on sex allocation in eusocial Hymenoptera compared population sex investment ratios across species. Female-biased investment in monogyne (= with single-queen colonies) populations of ants suggested that workers manipulate sex allocation according to their higher relatedness to females than males (relatedness asymmetry). However, several factors may confound these comparisons across species. First, variation in relatedness asymmetry is typically associated with major changes in breeding system and life history that may also affect sex allocation. Second, the relative cost of females and males is difficult to estimate across sexually dimorphic taxa, such as ants. Third, each species in the comparison may not represent an independent data point, because of phylogenetic relationships among species. Recently, stronger evidence that workers control sex allocation has been provided by intraspecific studies of sex ratio variation across colonies. In several species of eusocial Hymenoptera, colonies with high relatedness asymmetry produced mostly females, in contrast to colonies with low relatedness asymmetry, which produced mostly males. Additional signs of worker control were found by investigating proximate mechanisms of sex ratio manipulation in ants and wasps. However, worker control is not always effective, and further manipulative experiments will be needed to disentangle the multiple evolutionary factors and processes affecting sex allocation in eusocial Hymenoptera.
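The relatedness asymmetry at the heart of this argument is simple arithmetic under single mating in haplodiploids: queens are equally related to daughters and sons, while workers are related by 3/4 to sisters and 1/4 to brothers, predicting a 3:1 female-biased investment ratio under worker control.

```python
from fractions import Fraction

# Relatedness coefficients under haplodiploidy with a singly mated queen.
r_worker_sister = Fraction(3, 4)   # sisters share the father's whole genome
r_worker_brother = Fraction(1, 4)  # brothers carry only maternal genes
r_queen_daughter = r_queen_son = Fraction(1, 2)

worker_asymmetry = r_worker_sister / r_worker_brother   # 3:1 under worker control
queen_asymmetry = r_queen_daughter / r_queen_son        # 1:1 under queen control
print(worker_asymmetry, queen_asymmetry)
```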


Abstract: The contribution of ink evidence to forensic science is described and supported by an abundant literature and by two standards from the American Society for Testing and Materials (ASTM). The vast majority of the available literature is concerned with the physical and chemical analysis of ink evidence. The relevant ASTM standards mention some principles regarding the comparison of pairs of ink samples and the evaluation of their evidential value. A review of this literature and, more specifically, of the ASTM standards in the light of recent developments in the interpretation of forensic evidence has shown some potential improvements, which would maximise the benefits of the use of ink evidence in forensic science. This thesis proposes to interpret ink evidence within the widely accepted and recommended Bayesian framework. This proposition has required the development of a new quality assurance process for the analysis and comparison of ink samples, as well as the definition of a theoretical framework for ink evidence. The proposed technology has been extensively tested using a large dataset of ink samples and state-of-the-art tools commonly used in biometry. Overall, this research successfully addresses a concrete problem generally encountered in forensic science, where scientists tend to limit the usefulness of the information that is present in various types of evidence by trying to answer the wrong questions. The declaration of an explicit framework, which defines and formalises their goals and expected contributions to the criminal and civil justice system, enables the determination of their needs in terms of technology and data. The development of this technology and the collection of the data can then be justified economically, structured scientifically, and proceed efficiently.
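The Bayesian evaluation advocated here boils down to a likelihood ratio for an ink comparison score; the Gaussian score models and numbers below are purely illustrative assumptions, not the thesis's fitted models:

```python
import math

# Likelihood-ratio sketch: LR = p(score | same source) / p(score | different
# sources). The two Gaussian score distributions are hypothetical stand-ins.
def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

score = 0.9  # similarity score between two ink samples (illustrative)
lr = gaussian_pdf(score, 1.0, 0.1) / gaussian_pdf(score, 0.3, 0.2)
posterior_odds = lr * 1.0  # prior odds assumed to be 1 for illustration
print(lr > 1)  # evidence supports the same-source proposition
```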