959 results for Bayesian inference on precipitation
Abstract:
Synthetic hydrous niobium oxide was used for phosphate removal from aqueous solutions. The kinetic data correspond very well to the pseudo-second-order equation. Phosphate removal tended to increase with decreasing pH. The equilibrium data are described very well by the Langmuir isotherm. The peak appearing at 1050 cm⁻¹ in the IR spectra after adsorption was attributed to the bending vibration of adsorbed phosphate. The adsorption capacities are high and increased with increasing temperature. The evaluated ΔG° and ΔH° indicate the spontaneous and endothermic nature of the reactions. The adsorption occurs with an increase in entropy (positive ΔS°), suggesting increased randomness at the solid-liquid interface during adsorption. A phosphate desorbability of approximately 60% was observed with water at pH 12, which indicates relatively strong bonding between the adsorbed phosphate and the sorptive sites on the surface of the adsorbent.
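For reference, the two models this abstract invokes have standard textbook forms (generic background, not an excerpt from the paper); here q_t is the uptake at time t, q_e the equilibrium uptake, k_2 the pseudo-second-order rate constant, C_e the equilibrium solute concentration, q_max the monolayer capacity, and K_L the Langmuir constant:

```latex
% Pseudo-second-order kinetics (integrated, linearized form)
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}

% Langmuir isotherm
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}

% Thermodynamic quantities cited in the abstract
\Delta G^{\circ} = -RT\ln K, \qquad
\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ}
```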
Abstract:
We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments.
Abstract:
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques.
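To make the set-valued semantics concrete, here is a minimal sketch, not the 2U algorithm itself but brute-force vertex enumeration on a hypothetical two-node binary network; the intervals are invented for illustration, and the point is only that inference yields bounds rather than a single posterior:

```python
import itertools

# Toy credal network A -> B, both binary. Each probability is known
# only to lie in an interval; we bound P(A=1 | B=1) by enumerating the
# extreme points (vertices) of the credal sets. This brute force is
# exponential and purely illustrative; 2U obtains such bounds on
# binary polytrees in polynomial time.

p_a = (0.3, 0.5)            # interval for P(A=1)
p_b_given_a1 = (0.7, 0.9)   # interval for P(B=1 | A=1)
p_b_given_a0 = (0.1, 0.2)   # interval for P(B=1 | A=0)

posteriors = []
for pa, pb1, pb0 in itertools.product(p_a, p_b_given_a1, p_b_given_a0):
    joint_a1 = pa * pb1          # P(A=1, B=1)
    joint_a0 = (1 - pa) * pb0    # P(A=0, B=1)
    posteriors.append(joint_a1 / (joint_a1 + joint_a0))

print(f"P(A=1 | B=1) in [{min(posteriors):.3f}, {max(posteriors):.3f}]")
```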
Abstract:
In this paper, a supervisory system able to diagnose different types of faults during the operation of a proton exchange membrane fuel cell is introduced. The diagnosis is developed by applying Bayesian networks, which qualify and quantify the cause-effect relationships among the process variables. The fault diagnosis is based on on-line monitoring of variables that are easy to measure in the machine, such as voltage, electric current, and temperature. The equipment is a fuel cell system that can operate even when a fault occurs. The fault effects are based on experiments on the fault-tolerant fuel cell, which are reproduced in a fuel cell model. A database of fault records is constructed from the fuel cell model, improving the generation time and avoiding permanent damage to the equipment.
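As a rough illustration of the cause-effect reasoning described above (the paper's actual network, variables, and probabilities are not reproduced; the fault node, the symptom discretization, and all numbers below are assumptions), a two-symptom Bayes-rule calculation might look like:

```python
# Hypothetical single-fault, two-symptom diagnosis; all values invented.
P_FAULT = 0.05  # assumed prior probability of a fault

# Assumed likelihoods of discretized symptoms given the fault state:
P_LOW_VOLTAGE = {True: 0.90, False: 0.10}
P_HIGH_TEMP = {True: 0.70, False: 0.20}

def posterior_fault(low_voltage: bool, high_temp: bool) -> float:
    """P(fault | evidence) by direct Bayes-rule enumeration, assuming
    the symptoms are conditionally independent given the fault."""
    def likelihood(fault: bool) -> float:
        lv = P_LOW_VOLTAGE[fault] if low_voltage else 1 - P_LOW_VOLTAGE[fault]
        ht = P_HIGH_TEMP[fault] if high_temp else 1 - P_HIGH_TEMP[fault]
        return lv * ht
    num = P_FAULT * likelihood(True)
    den = num + (1 - P_FAULT) * likelihood(False)
    return num / den

print(f"P(fault | low V, high T) = {posterior_fault(True, True):.3f}")
```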
Abstract:
The 475 °C embrittlement in stainless steels is a well-known phenomenon associated with alpha prime (α′), formed by precipitation or spinodal decomposition. Many doubts still remain about the mechanism of α′ formation and its consequences for deformation and fracture mechanisms and corrosion resistance. In this investigation, the fracture behavior and corrosion resistance of two high-performance ferritic stainless steels were investigated: the superferritic DIN 1.4575 and the MA 956 superalloy. Samples of both stainless steels (SS) were aged at 475 °C for periods varying from 1 to 1,080 h. Their fracture surfaces were observed using scanning electron microscopy (SEM), and the cleavage planes were determined by electron backscatter diffraction (EBSD). Some samples were tested for corrosion resistance using electrochemical impedance spectroscopy (EIS) and potentiodynamic polarization. Brittle and ductile fractures were observed in both ferritic stainless steels after aging at 475 °C. For aging periods longer than 500 h, the ductile fracture regions completely disappeared. The cleavage plane in the DIN 1.4575 samples aged at 475 °C for 1,080 h was mainly {110}; however, the {102}, {314}, and {131} families of planes were also detected. The pitting corrosion resistance decreased with aging at 475 °C. The effect of alpha prime on the corrosion resistance was more significant in the DIN 1.4575 SS than in the Incoloy MA 956.
Abstract:
The effect of precipitation on the corrosion resistance of AISI 316L(N) stainless steel previously exposed to creep tests at 600 °C for periods of up to 10 years has been studied. The corrosion resistance was investigated in 2 M H₂SO₄ + 0.5 M NaCl + 0.01 M KSCN solution at 30 °C by electrochemical methods. The results showed that the susceptibility to intergranular corrosion was strongly affected by aging at 600 °C and by the creep testing time. The intergranular corrosion resistance decreased by more than twenty times when the creep testing time increased from 7,500 h to 85,000 h. The tendency to passivation decreased, and less protective films were formed on the creep-tested samples. All tested samples also showed susceptibility to pitting. Grain boundary M₂₃C₆ carbides were not found after long-term exposure at 600 °C, and the corrosion behavior of the creep-tested samples was attributed to the precipitation of intermetallic phases (mainly sigma phase).
Abstract:
BACKGROUND: The use of the volatile salt ammonium carbamate in protein downstream processing has recently been proposed. The main advantage of using volatile salts is that they can be removed from precipitates and liquid effluents through pressure reduction or temperature increase. Although previous studies showed that ammonium carbamate is efficient as a precipitant agent, there was evidence of denaturation of some enzymes. In this work, the effect of ammonium carbamate on the stability of five enzymes was evaluated. RESULTS: Activity assays showed that alpha-amylase (1,4-alpha-D-glucan glucanohydrolase, EC 3.2.1.1), lysozyme (1,4-beta-N-acetylmuramoylhydrolase, EC 3.2.1.17) and lipase (triacylglycerol acylhydrolase, EC 3.1.1.3) did not undergo activity loss in ammonium carbamate solutions with concentrations from 1.0 to 5.0 mol kg⁻¹, whereas the cellulase complex (1,4-(1,3;1,4)-beta-D-glucan 4-glucanohydrolase, EC 3.2.1.4) and peroxidase (hydrogen peroxide oxidoreductase, EC 1.11.1.7) showed average activity losses of 55% and 44%, respectively. Precipitation assays did not show enzyme denaturation or phase separation for alpha-amylase and lipase, while cellulase and peroxidase precipitated with some activity reduction. Similar experiments with ammonium and sodium sulfate showed no effect on the activity of the enzymes. CONCLUSION: Cellulase and peroxidase were denatured by ammonium carbamate. Until more systematic studies are available, care must be taken in designing a protein precipitation process with this salt. The results suggest that the generally accepted idea that salts that denature proteins also tend to solubilize them does not hold for ammonium carbamate.
Abstract:
Thermodynamic relations between the solubility of a protein and the solution pH are presented in this work. The hypotheses behind the development are that the protein chemical potential in the liquid phase can be described by Henry's law and that the solid-liquid equilibrium is established only between neutral molecules. The mathematical development results in an analytical expression for the solubility curve as a function of the ionization equilibrium constants, the pH, and the solubility at the isoelectric point. It is shown that the same equation can be obtained either by directly calculating the fraction of neutral protein molecules or by integrating the curve of the protein average charge. The methodology was successfully applied to the description of the solubility of porcine insulin as a function of pH at three different temperatures, and of bovine beta-lactoglobulin at four different ionic strengths.
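The paper's general derivation is not reproduced here, but the shape of the result can be sketched for the simplest amphoteric case, assuming a single cation/neutral equilibrium (pK_a1) and a single neutral/anion equilibrium (pK_a2): if the solid equilibrates only with the neutral species and the liquid phase obeys Henry's law, the total solubility is the neutral-form solubility S_0 divided by the neutral fraction, giving

```latex
S(\mathrm{pH}) = S_0\left(1 + 10^{\,\mathrm{p}K_{a1}-\mathrm{pH}}
                            + 10^{\,\mathrm{pH}-\mathrm{p}K_{a2}}\right)
```

so the solubility is minimal (approximately S_0) near the isoelectric point and rises on either side, which is the qualitative behavior fitted to the insulin and beta-lactoglobulin data.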
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained by factors and/or covariates. When such factors operate, the usual normal regression models, which inherently assume constant variance, will under-represent the variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models then underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The joint posterior density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
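A minimal sketch of such a model, not the authors' specification: a binomial response whose linear predictor carries a per-observation random effect for overdispersion, sampled by MCMC. PyMC is used here as a convenient modern tool (the paper does not name its software), and the priors and simulated data are assumptions:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n_obs, trials = 40, 30
u_true = rng.normal(0.0, 0.8, size=n_obs)            # latent overdispersion
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + u_true)))
y = rng.binomial(trials, p_true)                     # simulated proportions

with pm.Model() as dglm:
    beta0 = pm.Normal("beta0", 0.0, 2.0)             # intercept (assumed prior)
    sigma = pm.HalfNormal("sigma", 1.0)              # random-effect scale
    u = pm.Normal("u", 0.0, sigma, shape=n_obs)      # overdispersion effect
    p = pm.math.invlogit(beta0 + u)
    pm.Binomial("y", n=trials, p=p, observed=y)
    trace = pm.sample(1000, tune=1000, chains=2)     # MCMC over the posterior

print(trace.posterior["sigma"].mean().item())        # posterior mean scale
```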
Abstract:
Capybaras were monitored weekly from 1998 to 2006 by counting individuals in three anthropogenic environments (mixed agricultural fields, forest and open areas) of southeastern Brazil in order to examine the possible influence of environmental variables (temperature, humidity, wind speed, precipitation and global radiation) on the detectability of this species. There was consistent seasonality in the number of capybaras in the study area, with a specific seasonal pattern in each area. Log-linear models were fitted to the sample counts of adult capybaras separately for each sampled area, with an allowance for monthly effects, time trends and the effects of environmental variables. Log-linear models containing effects for the months of the year and a quartic time trend were highly significant. The effects of environmental variables on sample counts were different in each type of environment. As environmental variables affect capybara detectability, they should be considered in future species survey/monitoring programs.
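A hedged sketch of the kind of log-linear model described, monthly effects plus a quartic time trend fitted by Poisson GLM; the data are simulated and all variable names and coefficients are illustrative, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = np.arange(416)                       # ~8 years of weekly counts
month = (weeks // 4) % 12                    # crude month index (assumed)
trend = weeks / weeks.max()
log_mu = 3.0 + 0.4 * np.sin(2 * np.pi * month / 12) - 0.5 * trend**4
counts = rng.poisson(np.exp(log_mu))         # simulated capybara counts

# Design matrix: month dummies plus linear..quartic time-trend terms.
X = pd.get_dummies(pd.Categorical(month), prefix="m", drop_first=True)
for k in range(1, 5):
    X[f"t{k}"] = trend**k
X = sm.add_constant(X.astype(float))

model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.summary())
```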
Abstract:
This paper applies hierarchical Bayesian models to price farm-level yield insurance contracts. The methodology accounts for temporal effects and spatial dependence, and considers spatio-temporal models. One of the major advantages of this framework is that an estimate of the premium rate is obtained directly from the posterior distribution. These methods were applied to a farm-level data set of soybean yields in the State of Paraná (Brazil) for the period between 1994 and 2003. The model selection was based on a posterior predictive criterion. This study considerably improves the estimation of fair premium rates, given the small number of observations.
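To illustrate how a premium rate falls out of a posterior distribution (a generic actuarial identity, not the paper's specific model): given posterior predictive draws of a farm's yield, the fair rate at coverage level g is the expected indemnity divided by the liability. The draws below are simulated stand-ins:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for posterior predictive draws of farm-level soybean yield
# (kg/ha); a real analysis would produce these from the fitted
# hierarchical model rather than from a normal distribution.
yield_draws = rng.normal(3000, 450, size=10_000)

mu = yield_draws.mean()
for g in (0.6, 0.7, 0.8):                        # coverage levels
    liability = g * mu
    indemnity = np.maximum(0.0, liability - yield_draws)
    print(f"coverage {g:.0%}: premium rate = {indemnity.mean() / liability:.4f}")
```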
Abstract:
Genetic recombination can produce heterogeneous phylogenetic histories within a set of homologous genes. Delineating recombination events is important in the study of molecular evolution, as inference of such events provides a clearer picture of the phylogenetic relationships among different gene sequences or genomes. Nevertheless, detecting recombination events can be a daunting task, as the performance of different recombination-detecting approaches can vary depending on the evolutionary events that take place after recombination. We recently evaluated the effects of post-recombination events on the prediction accuracy of recombination-detecting approaches using simulated nucleotide sequence data. The main conclusion, supported by other studies, is that one should not depend on a single method when searching for recombination events. In this paper, we introduce a two-phase strategy, applying three statistical measures to detect the occurrence of recombination events and a Bayesian phylogenetic approach to delineate the breakpoints of such events in nucleotide sequences. We evaluate the performance of these approaches using simulated data, and demonstrate the applicability of this strategy to empirical data. The two-phase strategy proves to be time-efficient when applied to large datasets, and yields high-confidence results.
Abstract:
The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
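A generic simulated-annealing skeleton of the kind the tree sampler plugs into; parsimony_score and random_neighbor are placeholders for a real tree data structure and GGS-style tree moves, which the paper defines and this sketch does not reproduce:

```python
import math
import random

def anneal(tree, parsimony_score, random_neighbor,
           t_start=10.0, t_end=0.01, cooling=0.999):
    """Minimize parsimony_score over tree space by annealed local moves."""
    current, best = tree, tree
    cur_cost = best_cost = parsimony_score(tree)
    t = t_start
    while t > t_end:
        candidate = random_neighbor(current)
        cost = parsimony_score(candidate)
        # Always accept improvements; accept worse trees with a
        # probability that shrinks as the temperature decreases.
        if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / t):
            current, cur_cost = candidate, cost
            if cur_cost < best_cost:
                best, best_cost = current, cur_cost
        t *= cooling                       # geometric cooling schedule
    return best, best_cost
```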
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and the values can be text, categorical, or numerical. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used for predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVMs (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
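As a concrete instance of one family listed above (example-based methods), a minimal k-nearest-neighbors classifier in scikit-learn; the dataset and parameters are arbitrary and purely illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small benchmark dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit a 5-nearest-neighbors classifier and report held-out accuracy.
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```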