965 results for Bayesian p-values
Abstract:
Background: With nearly 1,100 species, the fish family Characidae represents more than half of the species of Characiformes and is a key component of Neotropical freshwater ecosystems. The composition, phylogeny, and classification of Characidae are currently uncertain, despite significant efforts based on analyses of morphological and molecular data. No consensus about the monophyly of this group or its position within the order Characiformes has been reached, a problem compounded by the fact that many key studies to date have non-overlapping taxonomic representation and focus only on subsets of this diversity. Results: In the present study we propose a new definition of the family Characidae and a hypothesis of relationships for the Characiformes based on phylogenetic analysis of DNA sequences of two mitochondrial and three nuclear genes (4,680 base pairs). The sequences were obtained from 211 samples representing 166 genera distributed among all 18 recognized families in the order Characiformes, all 14 recognized subfamilies in the Characidae, plus 56 of the genera so far considered incertae sedis in the Characidae. The phylogeny obtained is robust, with most lineages significantly supported by posterior probabilities in Bayesian analysis and by high bootstrap values from maximum likelihood and parsimony analyses. Conclusion: A monophyletic assemblage strongly supported in all our phylogenetic analyses is herein defined as the Characidae; it includes the characiform species lacking a supraorbital bone and with a derived position of the emergence of the hyoid artery from the anterior ceratohyal. To recognize this and several other monophyletic groups within characiforms, we propose changes in the limits of several families to facilitate future studies in the Characiformes and particularly the Characidae. This work presents a new phylogenetic framework for a speciose and morphologically diverse group of freshwater fishes of significant ecological and evolutionary importance across the Neotropics and portions of Africa.
Abstract:
Hardy-Weinberg Equilibrium (HWE) is an important genetic property that populations should exhibit whenever they are not subject to adverse situations such as complete lack of panmixia, excess of mutations, excess of selection pressure, etc. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While historically the HWE formula was developed to examine the transmission of alleles in a population from one generation to the next, use of HWE concepts has expanded in human disease studies to detect genotyping error and disease susceptibility (association); Ryckman and Williams (2008). Most analyses focus on trying to answer the question of whether a population is in HWE; they do not try to quantify how far from equilibrium the population is. In this paper, we propose the use of a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced in this paper is that, just like standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) when comparing positive and negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The disequilibrium coefficient proposed provides an easy and efficient way to carry out the analyses, especially if one uses Bayesian statistics. An R routine (R Development Core Team, 2009) that implements the calculations is provided for readers.
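The abstract above turns on a disequilibrium coefficient that is bounded, symmetric around zero, and studied through its posterior density. A minimal sketch of that style of analysis for a biallelic locus is given below; the Dirichlet posterior over genotype frequencies and the particular standardization of D = p_AA - p_A^2 (the within-population fixation index) are assumptions made for illustration, not the coefficient, the FBST computation, or the R routine referenced in the paper.

```python
# Illustrative sketch only: a Bayesian look at Hardy-Weinberg disequilibrium
# for one biallelic locus, using a standardization chosen for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical genotype counts for one biallelic locus: (AA, Aa, aa)
counts = np.array([30, 50, 20])

# Dirichlet(1, 1, 1) prior on genotype probabilities gives a Dirichlet posterior
posterior = rng.dirichlet(counts + 1, size=50_000)

p_AA, p_Aa = posterior[:, 0], posterior[:, 1]
p_A = p_AA + 0.5 * p_Aa              # allele frequency of A under each draw
D = p_AA - p_A**2                    # raw Hardy-Weinberg disequilibrium
f = D / (p_A * (1.0 - p_A))          # one common standardization (fixation index)

print(f"posterior mean of the coefficient: {f.mean():.3f}")
print("95% credible interval:", np.quantile(f, [0.025, 0.975]))
```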
Abstract:
Ten cattle and 10 buffalo were divided into 2 groups (control [n = 8] and experimental [n = 12]) that received daily administration of copper. Hepatic biopsies and blood samples were collected on days 0, 45, and 105. The concentration of hepatic copper was determined by atomic absorption spectrophotometry, and the activities of aspartate aminotransferase (AST) and gamma-glutamyl transferase (GGT) were analyzed. Regression analyses were performed to verify the possible relationship between enzymatic activity and concentration of hepatic copper. Sensitivity, specificity, accuracy, and positive and negative predictive values were determined. The serum activities of AST and GGT had coefficients of determination that made them excellent predictive indicators of hepatic copper accumulation in cattle, while only GGT serum activity was predictive of hepatic copper accumulation in buffalo. Elevated serum GGT activity may be indicative of increased concentrations of hepatic copper even in cattle and buffalo that appear to be clinically healthy. Thus, prophylactic measures can be implemented to prevent the onset of the hemolytic crisis that is characteristic of copper intoxication.
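Since the abstract reports sensitivity, specificity, accuracy, and predictive values without defining them, a short sketch of how these metrics are computed from a 2x2 classification table follows; the counts are hypothetical, not the study's data.

```python
# Diagnostic-performance metrics from a 2x2 table of test outcome vs. true status.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),             # true positive rate
        "specificity": tn / (tn + fp),             # true negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                     # positive predictive value
        "npv": tn / (tn + fn),                     # negative predictive value
    }

# Hypothetical counts: 9 true positives, 2 false positives, 1 false negative, 8 true negatives
print(diagnostic_metrics(tp=9, fp=2, fn=1, tn=8))
```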
Abstract:
We present a new set of oscillator strengths for 142 Fe II lines in the wavelength range 4000-8000 angstrom. Our gf-values are both accurate and precise, because each multiplet was globally normalized using laboratory data (accuracy), while the relative gf-values of individual lines within a given multiplet were obtained from theoretical calculations (precision). Our line list was tested with the Sun and with high-resolution (R approximate to 10(5)), high-S/N (approximate to 700-900) Keck+HIRES spectra of the metal-poor stars HD 148816 and HD 140283, for which line-to-line scatter (sigma) in the iron abundances from Fe II lines is as low as 0.03, 0.04, and 0.05 dex, respectively. For these three stars the standard error in the mean iron abundance from Fe II lines is negligible (sigma(mean) <= 0.01 dex). The mean solar iron abundance obtained using our gf-values and different model atmospheres is A(Fe) = 7.45 (sigma = 0.02).
Abstract:
The PHENIX experiment at the Relativistic Heavy Ion Collider has measured the invariant differential cross section for production of K(S)(0), omega, eta', and phi mesons in p + p collisions at root s = 200 GeV. Measurements of omega and phi production in different decay channels give consistent results. New results for the omega are in agreement with previously published data and extend the measured p(T) coverage. The spectral shapes of all hadron transverse momentum distributions measured by PHENIX are well described by a Tsallis distribution functional form with only two parameters, n and T, which determine the high-p(T) and the low-p(T) regions of the spectra, respectively. The values of these parameters are very similar for all analyzed meson spectra, but a lower parameter T is extracted for protons. The integrated invariant cross sections calculated from the fitted distributions are found to be consistent with existing measurements and with statistical model predictions.
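For reference, one common two-parameter Tsallis (Levy-type) shape used to fit hadron p(T) spectra is sketched below; the exact normalization and parameter values used by PHENIX are not reproduced here, so the function and numbers are purely illustrative.

```python
# Unnormalized two-parameter Tsallis shape often used for p_T spectra:
# (1 + (m_T - m) / (n*T))**(-n), with m_T the transverse mass.
import numpy as np

def tsallis_shape(pt, mass, n, T):
    """Unnormalized dN/dp_T shape as a function of p_T (GeV/c)."""
    mt = np.sqrt(pt**2 + mass**2)          # transverse mass, GeV
    return (1.0 + (mt - mass) / (n * T))**(-n)

pt = np.linspace(0.5, 10.0, 5)             # GeV/c
print(tsallis_shape(pt, mass=0.78, n=10.0, T=0.12))  # hypothetical parameter values
```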
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
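As a concrete reminder of the generalization measure mentioned above, the sketch below computes the Kullback-Leibler divergence between two discrete distributions; the distributions themselves are hypothetical stand-ins, not the HMM parameters from the paper.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; terms with p_i = 0 contribute 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

true_dist = [0.7, 0.2, 0.1]       # e.g. a "teacher" emission distribution
learned_dist = [0.6, 0.25, 0.15]  # a "student" estimate at some point of learning
print(kl_divergence(true_dist, learned_dist))  # value tracked to draw a learning curve
```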
Abstract:
The metrological principles of neutron activation analysis are discussed. It has been demonstrated that this method can provide elemental amount-of-substance values fully traceable to the SI. The method has been used by several laboratories worldwide in a number of CCQM key comparisons - interlaboratory comparison tests at the highest metrological level - supplying results equivalent to values from other methods for elemental or isotopic analysis in complex samples, without the need to perform chemical destruction and dissolution of these samples. The CCQM therefore accepted, in April 2007, the claim that neutron activation analysis should have similar status to the methods originally listed by the CCQM as 'primary methods of measurement'. Analytical characteristics and the scope of application are given.
Abstract:
BACKGROUND: Xylitol is a sugar alcohol (polyalcohol) with many interesting properties for pharmaceutical and food products. It is currently produced by a chemical process, which has some disadvantages such as high energy requirements. Microbiological production of xylitol has therefore been studied as an alternative, but its viability depends on optimisation of the fermentation variables. Among these, aeration is fundamental, because xylitol is produced only under adequate oxygen availability. In most experiments with xylitol-producing yeasts, low volumetric oxygen transfer coefficient (K(L)a) values are used to maintain microaerobic conditions. However, in the present study the use of relatively high K(L)a values resulted in high xylitol production. The effect of aeration was also evaluated via the profiles of xylose reductase (XR) and xylitol dehydrogenase (XD) activities during the experiments. RESULTS: The highest XR specific activity (1.45 +/- 0.21 U mg(protein)(-1)) was achieved during the experiment with the lowest K(L)a value (12 h(-1)), while the highest XD specific activity (0.19 +/- 0.03 U mg(protein)(-1)) was observed with a K(L)a value of 25 h(-1). Xylitol production was enhanced when K(L)a was increased from 12 to 50 h(-1), which resulted in the best condition observed, corresponding to a xylitol volumetric productivity of 1.50 +/- 0.08 g(xylitol) L(-1) h(-1) and an efficiency of 71 +/- 6.0%. CONCLUSION: The results showed that enzyme activities during xylitol bioproduction depend greatly on the initial K(L)a value (oxygen availability). This finding supplies important information for further studies in molecular biology and genetic engineering aimed at improving xylitol bioproduction. (C) 2008 Society of Chemical Industry
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) were proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated such structure. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, inference and parameter estimation for such models remain computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. The results obtained on public data from the HapMap database showed that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservability of genetic markers and for guiding the selection of a subset of representative markers.
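Since the learned models are compared against the classical LD measure D', a brief sketch of how pairwise D' is computed from haplotype and allele frequencies follows; the frequencies used are hypothetical.

```python
def d_prime(p_ab: float, p_a: float, p_b: float) -> float:
    """Signed D' for two biallelic loci (the absolute value is often reported).

    p_ab: frequency of haplotype AB; p_a, p_b: allele frequencies of A and B.
    """
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

print(d_prime(p_ab=0.40, p_a=0.55, p_b=0.60))  # ~0.32 for these made-up frequencies
```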
Abstract:
The practicability of estimating directional wave spectra based on a vessel's first-order response has recently been addressed by several researchers. Different alternatives regarding statistical inference methods and possible drawbacks that could arise from their application have been extensively discussed, with an apparent preference for estimations based on Bayesian inference algorithms. Most of the results on this matter, however, rely exclusively on numerical simulations or, at best, on a few sparse full-scale measurements, comprising a questionable basis for validation purposes. This paper discusses several issues that have recently been debated regarding the advantages of Bayesian inference and different alternatives for its implementation. Among these are the definition of the best set of input motions, the number of parameters required for guaranteeing smoothness of the spectrum in frequency and direction, and how to determine their optimum values. These subjects are addressed in the light of an extensive experimental campaign performed with a small-scale model of an FPSO platform (VLCC hull), conducted in an ocean basin in Brazil. Tests involved long- and short-crested seas with variable levels of directional spreading and also bimodal conditions. The calibration spectra measured in the tank by means of an array of wave probes constituted the paradigm for the estimations. Results showed that a wide range of sea conditions could be estimated with good precision, even those with somewhat low peak periods. Some possible drawbacks pointed out in previous works concerning the viability of employing large vessels for such a task are then refuted. Also, it is shown that a second parameter for smoothing the spectrum in frequency may indeed increase the accuracy in some situations, although the criterion usually proposed for estimating the optimum values (ABIC) demands large computational effort and does not seem adequate for practical on-board systems, which require expeditious estimations. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences about the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
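To make the overdispersion-modeling idea above concrete, the sketch below fits a much simpler stand-in, a beta-binomial model for overdispersed proportions, by random-walk Metropolis; the data, priors, and model are illustrative assumptions and not the double generalized linear model or MCMC scheme used in the paper.

```python
# Minimal Bayesian sketch for overdispersed proportion data (beta-binomial
# likelihood with mean mu and precision phi), sampled with random-walk Metropolis.
import numpy as np
from scipy.stats import betabinom, norm

rng = np.random.default_rng(0)

# Hypothetical data: successes y out of n trials per experimental unit
n = np.array([20, 20, 20, 20, 20, 20])
y = np.array([ 3, 12,  5, 15,  4, 14])

def log_post(mu, phi):
    if not (0 < mu < 1) or phi <= 0:
        return -np.inf
    a, b = mu * phi, (1 - mu) * phi
    # flat prior on mu, weak half-normal prior on phi (up to a constant)
    return betabinom.logpmf(y, n, a, b).sum() + norm.logpdf(phi, 0, 50)

samples, state = [], (0.5, 10.0)
lp = log_post(*state)
for _ in range(20_000):
    prop = (state[0] + 0.05 * rng.standard_normal(),
            state[1] + 1.0 * rng.standard_normal())
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        state, lp = prop, lp_prop
    samples.append(state)

mu_draws = np.array([s[0] for s in samples[5000:]])  # drop burn-in
print("posterior mean of mu:", mu_draws.mean())
```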
Abstract:
This study aimed to evaluate the mechanical, physical and biological properties of laminated veneer lumber (LVL) made from Pinus oocarpa Schiede ex Schltdl (PO) and Pinus kesiya Royle ex Gordon (PK) and to provide a nondestructive characterization thereof. Four PO and four PK LVL boards, each made from 22 randomly selected 2-mm-thick veneers, were produced according to the following characteristics: phenol-formaldehyde adhesive (190 g/m(2)), hot-pressing at 150 degrees C for 45 min and 2.8 N/mm(2) of specific pressure. After board production, nondestructive evaluation was conducted, and stress wave velocity (v(0)) and dynamic modulus of elasticity (E(Md)) were determined. The following mechanical and physical properties were then evaluated: static bending modulus of elasticity (E(M)), modulus of rupture (f(M)), compression strength parallel to grain (f(c,0)), shear strength parallel to the glue-line (f(v,0)), shear strength perpendicular to the glue-line (f(v,90)), thickness swelling (TS), water absorption (WA), and permanent thickness swelling (PTS) after 2, 24, and 96 h of water immersion. Biological properties were also evaluated by measuring the weight loss caused by Trametes versicolor (Linnaeus ex Fries) Pilat (white-rot) and Gloeophyllum trabeum (Persoon ex Fries) Murrill (brown-rot). After hot-pressing, no bubbles, delamination, or warping were observed for either species. In general, PK boards presented higher mechanical properties (E(M), E(Md), f(M), f(c,0)), whereas PO boards were dimensionally more stable, with lower values of WA, TS, and PTS in the 2-, 24-, and 96-hour immersion periods. Board density, f(v,0), f(v,90) and rot weight loss were statistically equal for PO and PK LVL. The prediction of flexural properties of the consolidated LVL by the nondestructive method used was not very efficient, and the fitted models presented low predictability.
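The dynamic modulus of elasticity mentioned above is usually obtained in stress-wave testing from the wave velocity and the board density; the relation sketched below (E_d = density x velocity squared) is the standard one in nondestructive wood evaluation, and both the assumption that this study used exactly that expression and the numbers shown are illustrative.

```python
def dynamic_moe_mpa(density_kg_m3: float, velocity_m_s: float) -> float:
    """Dynamic modulus of elasticity E_d = rho * v**2, converted from Pa to MPa."""
    return density_kg_m3 * velocity_m_s**2 / 1e6

# Hypothetical values for an LVL specimen, not measurements from the study
print(dynamic_moe_mpa(density_kg_m3=550.0, velocity_m_s=5000.0))  # ~13750 MPa
```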
Abstract:
This study aimed to evaluate adult emergence and duration of the pupal stage of the Mediterranean fruit fly, Ceratitis capitata (Wiedemann), and emergence of the fruit fly parasitoid, Diachasmimorpha longicaudata (Ashmead), under different moisture conditions in four soil types, using soil water matric potential. Pupal stage duration in C. capitata was influenced differently for males and females. In females, only soil type affected pupal stage duration, which was longer in a clay soil. In males, pupal stage duration was individually influenced by moisture and soil type, with a reduction in pupal stage duration in a heavy clay soil and in a sandy clay, and a longer duration in the clay soil. As matric potential decreased, the duration of the pupal stage of C. capitata males increased, regardless of soil type. C. capitata emergence was affected by moisture, regardless of soil type, and was higher in drier soils. The emergence of D. longicaudata adults was individually influenced by soil type and moisture, and the number of emerged D. longicaudata adults was three times higher in sandy loam and lower in a heavy clay soil. In all cases, the number of emerged adults was higher under higher moisture conditions. C. capitata and D. longicaudata pupal development was affected by moisture and soil type, which may facilitate pest sampling and allow release areas for the parasitoid to be defined under field conditions.
Abstract:
This paper applies hierarchical Bayesian models to price farm-level yield insurance contracts. The methodology considers temporal effects, spatial dependence, and spatio-temporal models. One of the major advantages of this framework is that an estimate of the premium rate is obtained directly from the posterior distribution. These methods were applied to a farm-level data set of soybean yields in the State of Paraná (Brazil) for the period between 1994 and 2003. Model selection was based on a posterior predictive criterion. This study considerably improves the estimation of fair premium rates, given the small number of observations.
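As an illustration of how a premium rate can be read directly off a posterior (predictive) distribution, the sketch below computes an actuarially fair rate as expected indemnity over the guaranteed liability; the yield distribution, coverage level, and units are hypothetical stand-ins, not the hierarchical model or soybean data analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for posterior predictive draws of farm-level yield (kg/ha)
yield_draws = rng.gamma(shape=9.0, scale=300.0, size=100_000)

coverage = 0.7                                  # hypothetical 70% coverage level
guarantee = coverage * yield_draws.mean()       # guaranteed yield

indemnity = np.maximum(guarantee - yield_draws, 0.0)
premium_rate = indemnity.mean() / guarantee     # expected loss per unit of liability
print(f"fair premium rate: {premium_rate:.3%}")
```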
Abstract:
Over the years, crop insurance programs have become a focus of agricultural policy in the USA, Spain, Mexico, and, more recently, Brazil. Given the increasing interest in insurance, accurate calculation of the premium rate is of great importance. We address the crop-yield distribution issue and its implications for pricing an insurance contract, considering the dynamic structure of the data and incorporating the spatial correlation within a hierarchical Bayesian framework. Results show that empirical (insurers') rates are higher in low-risk areas and lower in high-risk areas. Such methodological improvement is particularly important in situations of limited data.