915 results for Bayesian p-values
Abstract:
Studies claim that gastric acidity is important in the absorption of levothyroxine (LT4), with controversial results regarding the interaction between proton pump inhibitors (PPIs) and LT4. The objective of this study was to establish the effect of concomitant use of LT4 and PPIs on TSH levels in adult patients with primary hypothyroidism. A systematic review was performed by searching Medline, Embase, Lilacs, Bireme, Scielo, Cochrane, University of York, Access Pharmacy, Google Scholar, Dialnet, and Opengray. The search was not limited by language. The effect was evaluated as the mean difference in TSH after LT4 intake versus after concomitant intake with PPIs. A meta-analysis, subgroup analyses, and sensitivity analyses were performed using Review Manager 5.3. Five articles were selected for the qualitative analysis and three for the meta-analysis. The quality of the studies was good and the risk of bias low. The mean difference obtained was 0.21 mIU/L (95% CI: 0.02-0.40; p=0.03; I2: 0%). In the subgroup analysis of patients older than 55 years, the mean difference was 0.21 mIU/L (95% CI: 0.01-0.40; p=0.27; I2: 19%). In the sensitivity analysis, the study with the largest sample was excluded and the mean difference was 0.49 mIU/L (95% CI: -0.12 to 1.11; p=0.12; I2: 0%). The mean difference in TSH after concomitant intake is not considered clinically significant, as it poses no risk to the patient. Randomized clinical trials are needed, as is evaluation of the effect on free T4 levels.
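As a concrete illustration of the pooling step reported above, here is a minimal inverse-variance fixed-effect meta-analysis of mean differences in Python; the per-study values are invented stand-ins for the extracted data (the review itself used Review Manager 5.3):

```python
import numpy as np

def fixed_effect_md(md, se):
    """Inverse-variance fixed-effect pooling of mean differences."""
    md, se = np.asarray(md), np.asarray(se)
    w = 1.0 / se**2                       # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)   # pooled mean difference
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    q = np.sum(w * (md - pooled)**2)      # Cochran's Q
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical per-study TSH mean differences (mIU/L) and standard errors
pooled, ci, i2 = fixed_effect_md([0.15, 0.30, 0.49], [0.12, 0.20, 0.31])
print(f"MD = {pooled:.2f} mIU/L, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {i2:.0f}%")
```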
Abstract:
Obtaining attribute values of non-chosen alternatives in a revealed preference context is challenging because non-chosen alternative attributes are unobserved by choosers, chooser perceptions of attribute values may not reflect reality, existing methods for imputing these values suffer from shortcomings, and obtaining non-chosen attribute values is resource intensive. This paper presents a unique Bayesian (multiple) Imputation Multinomial Logit model that imputes unobserved travel times and distances of non-chosen travel modes based on random draws from the conditional posterior distribution of missing values. The calibrated Bayesian (multiple) Imputation Multinomial Logit model imputes non-chosen time and distance values that convincingly replicate observed choice behavior. Although network skims were used for calibration, more realistic data such as supplemental geographically referenced surveys or stated preference data may be preferred. The model is ideally suited for imputing variation in intrazonal non-chosen mode attributes and for assessing the marginal impacts of travel policies, programs, or prices within traffic analysis zones.
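The paper's exact specification is not reproduced here; the following is a minimal Python sketch of the data-augmentation idea it describes, alternating a Metropolis update of a logit coefficient with redraws of the missing non-chosen travel times from their conditional posterior. The gamma "network skim" prior and the synthetic two-mode data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def choice_prob(beta, times, choice):
    """Per-chooser probability of the observed mode; utility = -beta * time."""
    v = -beta * times
    v = v - v.max(axis=1, keepdims=True)             # numerical stabilisation
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return p[np.arange(len(choice)), choice]

def loglik(beta, times, choice):
    return np.log(choice_prob(beta, times, choice)).sum()

# Synthetic data: two modes; only the chosen mode's travel time is observed
n = 200
true_times = rng.gamma(9.0, 3.0, size=(n, 2))
p1 = 1 / (1 + np.exp(0.1 * (true_times[:, 1] - true_times[:, 0])))
choice = (rng.uniform(size=n) < p1).astype(int)      # logit choice, true beta = 0.1
times = true_times.copy()
times[np.arange(n), 1 - choice] = rng.gamma(9.0, 3.0, size=n)  # initial imputation

beta = 0.05
for _ in range(2000):                                # data-augmentation sampler
    # (1) random-walk Metropolis step for the travel-time coefficient
    prop = beta + 0.01 * rng.normal()
    if np.log(rng.uniform()) < loglik(prop, times, choice) - loglik(beta, times, choice):
        beta = prop
    # (2) redraw the missing non-chosen times from their conditional posterior:
    # proposing from the gamma "skim" prior makes the acceptance ratio equal
    # to the choice-likelihood ratio (independence Metropolis)
    cand = times.copy()
    cand[np.arange(n), 1 - choice] = rng.gamma(9.0, 3.0, size=n)
    accept = rng.uniform(size=n) < choice_prob(beta, cand, choice) / choice_prob(beta, times, choice)
    times[accept] = cand[accept]

print(f"posterior draw of the travel-time coefficient: {beta:.3f}")
```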
Abstract:
The stochastic behavior of an aero-engine failure/repair process has been analyzed from a Bayesian perspective. The numbers of failures/repairs in the component sockets of this multi-component system are assumed to follow independent renewal processes with Weibull inter-arrival times. Based on the field failure/repair data of a large number of such engines, and independent Gamma priors on the scale parameters and log-concave priors on the shape parameters, an exact method of sampling from the resulting posterior distributions of the parameters has been proposed. These generated parameter values are then utilised to obtain the posteriors of the expected number of system repairs, the system failure rate, and the conditional intensity function, which are computed using a recursive formula.
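The abstract does not give the paper's recursive formula; as a hedged illustration, the sketch below discretizes the standard renewal equation M(t) = F(t) + ∫₀ᵗ M(t−u) dF(u) for Weibull inter-arrival times and averages the result over hypothetical posterior draws of the shape and scale parameters:

```python
import numpy as np

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def weibull_pdf(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

def renewal_function(shape, scale, t_max, n=2000):
    """Expected number of renewals M(t) via a simple first-order
    discretization of M(t) = F(t) + integral_0^t M(t - u) f(u) du."""
    h = t_max / n
    t = np.arange(1, n + 1) * h
    F, f = weibull_cdf(t, shape, scale), weibull_pdf(t, shape, scale)
    M = np.zeros(n + 1)
    for k in range(1, n + 1):
        conv = np.dot(M[k - 1::-1][:k], f[:k]) * h  # sum_j M(t_k - t_j) f(t_j) h
        M[k] = F[k - 1] + conv
    return t, M[1:]

# Average M(t) over hypothetical posterior draws of (shape, scale),
# e.g. as produced by the sampler described in the abstract
draws = [(1.4, 1000.0), (1.5, 950.0), (1.3, 1080.0)]
Ms = [renewal_function(a, b, t_max=3000.0)[1] for a, b in draws]
print("posterior mean E[N(3000 h)] ≈", round(np.mean([M[-1] for M in Ms]), 2))
```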
Abstract:
A Bayesian probabilistic methodology for on-line structural health monitoring, which addresses the issue of parameter uncertainty inherent in the problem, is presented. The method uses modal parameters for a limited number of modes identified from measurements taken at a restricted number of degrees of freedom of a structure as the measured structural data. The application presented uses a linear structural model whose stiffness matrix is parameterized to develop a class of possible models. Within the Bayesian framework, a joint probability density function (PDF) for the model stiffness parameters given the measured modal data is determined. Using this PDF, the marginal PDF of the stiffness parameter for each substructure given the data can be calculated.
Monitoring the health of a structure using these marginal PDFs involves two steps. First, the marginal PDF for each model parameter given modal data from the undamaged structure is found. The structure is then periodically monitored and updated marginal PDFs are determined. A measure of the difference between the calibrated and current marginal PDFs is used as a means to characterize the health of the structure. A procedure for interpreting the measure for use by an expert system in on-line monitoring is also introduced.
The probabilistic framework is developed in order to address the model parameter uncertainty issue inherent in the health monitoring problem. To illustrate this issue, consider a very simplified deterministic structural health monitoring method. In such an approach, the model parameters which minimize an error measure between the measured and model modal values would be used as the "best" model of the structure. Changes between the model parameters identified using modal data from the undamaged structure and subsequent modal data would be used to find the existence, location and degree of damage. Due to measurement noise, limited modal information, and model error, the "best" model parameters might vary from one modal dataset to the next without any damage present in the structure. Thus, difficulties would arise in separating normal variations in the identified model parameters based on limitations of the identification method and variations due to true change in the structure. The Bayesian framework described in this work provides a means to handle this parametric uncertainty.
The probabilistic health monitoring method is applied to simulated data and laboratory data. The results of these tests are presented.
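As an illustration of comparing calibrated and updated marginals, here is a minimal sketch, not the paper's actual measure, that treats the two stiffness-parameter marginal PDFs as Gaussian and estimates the probability of a stiffness reduction beyond a chosen fraction by Monte Carlo:

```python
import numpy as np

def damage_probability(mu_cal, sd_cal, mu_cur, sd_cur, frac=0.85, n=100_000):
    """Monte Carlo estimate of P(theta_current < frac * theta_calibrated),
    approximating both marginal PDFs as Gaussian."""
    rng = np.random.default_rng(1)
    cal = rng.normal(mu_cal, sd_cal, n)   # draws from the calibrated marginal
    cur = rng.normal(mu_cur, sd_cur, n)   # draws from the current marginal
    return np.mean(cur < frac * cal)

# Hypothetical substructure stiffness marginals (calibrated vs. current)
p = damage_probability(mu_cal=1.00, sd_cal=0.04, mu_cur=0.90, sd_cur=0.05)
print(f"probability of >15% stiffness loss: {p:.2f}")
```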
Abstract:
In this paper we study parameter estimation for time series with asymmetric α-stable innovations. The proposed methods use a Poisson sum series representation (PSSR) for the asymmetric α-stable noise to express the process in a conditionally Gaussian framework. That allows us to implement Bayesian parameter estimation using Markov chain Monte Carlo (MCMC) methods. We further enhance the series representation by introducing a novel approximation of the series residual terms, for which we are able to characterise the mean and variance of the approximation. Simulations illustrate the proposed framework applied to linear time series, estimating the model parameter values and model order P for an autoregressive (AR(P)) model driven by asymmetric α-stable innovations.
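A minimal sketch of the conditionally Gaussian structure that a Poisson sum series representation provides, here for the symmetric case; normalising constants and the residual-term correction the paper develops are omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def sas_pssr_draw(alpha, n_terms=100):
    """One draw from a truncated Poisson sum series representation of a
    symmetric alpha-stable variable, X ~ sum_i Gamma_i^(-1/alpha) * W_i,
    with Gamma_i Poisson-process arrival times and W_i standard normal.
    Conditional on the arrivals, X is exactly Gaussian -- the property
    that enables conditionally Gaussian MCMC."""
    gammas = np.cumsum(rng.exponential(size=n_terms))  # arrival times
    cond_var = np.sum(gammas ** (-2.0 / alpha))        # conditional variance
    return np.sqrt(cond_var) * rng.normal(), cond_var

x, v = sas_pssr_draw(alpha=1.5)
print(f"draw = {x:.3f}, conditional Gaussian variance = {v:.3f}")
```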
Abstract:
P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, resulting in an obstacle in chemotherapeutic treatment of cancer. In order to aid in the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids binding to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set that was independent of the training set, which showed a standard error of prediction of 0.146 +/- 0.006 (data scaled from 0 to 1). Meanwhile, two other mathematical tools, back-propagation neural network (BPNN) and partial least squares (PLS), were also attempted to build QSAR models. The BRNN provided slightly better results for the test set compared to BPNN, but the difference was not significant according to the F-statistic at p = 0.05. The PLS failed to build a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential in facilitating the prediction of P-gp flavonoid inhibitors and might be applied in further drug design.
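A full BRNN re-estimates its regularization hyperparameters from the Bayesian evidence; the sketch below shows only the core idea on synthetic stand-in data: minimizing F = βE_D + αE_W, a sum-squared-error term plus a weight-decay term arising from a Gaussian prior on the weights. All descriptor values, network sizes, and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the QSAR table: 57 compounds x 5 descriptors,
# activities scaled to (0, 1)
X = rng.normal(size=(57, 5))
y = 1 / (1 + np.exp(-X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2])))

def forward(X, w1, w2):
    h = np.tanh(X @ w1)                    # single hidden layer
    return h @ w2, h

w1 = 0.1 * rng.normal(size=(5, 4))
w2 = 0.1 * rng.normal(size=4)
alpha, beta, lr = 0.01, 1.0, 0.02          # prior / noise precisions, fixed here;
                                           # a full BRNN re-estimates them
for _ in range(3000):
    out, h = forward(X, w1, w2)
    err = out - y
    # gradients of F = beta * sum(err^2) + alpha * sum(weights^2)
    g2 = 2 * beta * (h.T @ err) + 2 * alpha * w2
    gh = 2 * beta * np.outer(err, w2) * (1 - h ** 2)
    g1 = X.T @ gh + 2 * alpha * w1
    w1 -= lr * g1 / len(y)
    w2 -= lr * g2 / len(y)

out, _ = forward(X, w1, w2)
print(f"training RMSE: {np.sqrt(np.mean((out - y) ** 2)):.3f}")
```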
Abstract:
Prostatic intraepithelial neoplasia (PIN) diagnosis and grading are affected by uncertainties which arise from the fact that almost all knowledge of PIN histopathology is expressed in concepts, descriptive linguistic terms, and words. A Bayesian belief network (BBN) was therefore used to reduce the problem of uncertainty in diagnostic clue assessment, while still considering the dependencies between elements in the reasoning sequence. A shallow network was used with an open-tree topology, with eight first-level descendant nodes for the diagnostic clues (evidence nodes), each independently linked by a conditional probability matrix to a root node containing the diagnostic alternatives (decision node). One of the evidence nodes was based on the tissue architecture and the others were based on cell features. The system was designed to be interactive, in that the histopathologist entered evidence into the network in the form of likelihood ratios for outcomes at each evidence node. The efficiency of the network was tested on a series of 110 prostate specimens, subdivided as follows: 22 cases of non-neoplastic prostate or benign prostatic tissue (NP), 22 PINs of low grade (PINlow), 22 PINs of high grade (PINhigh), 22 prostatic adenocarcinomas with cribriform pattern (PACcri), and 22 prostatic adenocarcinomas with large acinar pattern (PAClgac). The results obtained in the benign and malignant categories showed that the belief for the diagnostic alternatives is very high, the values being in general more than 0.8 and often close to 1.0. When considering the PIN lesions, the network classified and graded most of the cases with high certainty. However, there were some cases which showed values less than 0.8 (13 cases out of 44), indicating situations in which the feature changes are intermediate between contiguous categories or grades. Discrepancy between morphological grading and the BBN results was observed in four out of 44 PIN cases: one PINlow was classified as PINhigh and three PINhigh were classified as PINlow. In conclusion, the network can grade PIN lesions and differentiate them from other prostate lesions with certainty. In particular, it offers a descriptive classifier which is readily implemented and which allows the use of linguistic, fuzzy variables.
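A minimal sketch of the belief-update step in such a shallow, open-tree network: with conditionally independent evidence nodes, entering a likelihood vector per node and renormalizing yields the root-node belief. The conditional probability rows below are invented for illustration and cover only two of the eight evidence nodes:

```python
import numpy as np

classes = ["NP", "PINlow", "PINhigh", "PACcri", "PAClgac"]
prior = np.full(5, 0.2)                    # uniform root prior

# Hypothetical rows P(finding | class) for two evidence nodes
cpm = {
    "nuclear_crowding":   np.array([0.05, 0.40, 0.80, 0.90, 0.85]),
    "prominent_nucleoli": np.array([0.02, 0.20, 0.75, 0.95, 0.90]),
}

def update(prior, likelihoods):
    """Multiply the root prior by each evidence node's likelihood vector
    (likelihood-ratio / virtual evidence) and renormalize."""
    post = prior.copy()
    for lik in likelihoods:
        post *= lik
    return post / post.sum()

post = update(prior, cpm.values())
for c, p in zip(classes, post):
    print(f"{c}: {p:.3f}")
```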
Abstract:
Collaborative networks are typically formed by heterogeneous and autonomous entities, and thus it is natural that each member has its own set of core-values. Since these values somehow drive the behaviour of the involved entities, the ability to quickly identify partners with compatible or common core-values represents an important element for the success of collaborative networks. However, tools to assess or measure the level of alignment of core-values are lacking. Since the concept of 'alignment' in this context is still ill-defined and shows a multifaceted nature, three perspectives are discussed. The first one uses a causal maps approach in order to capture, structure, and represent the influence relationships among core-values. This representation provides the basis to measure the alignment in terms of the structural similarity and influence among value systems. The second perspective considers the compatibility and incompatibility among core-values in order to define the alignment level. Under this perspective we propose a fuzzy inference system to estimate the alignment level, since this approach allows dealing with variables that are vaguely defined and whose inter-relationships are difficult to define. Another advantage provided by this method is the possibility of incorporating expert human judgment in the definition of the alignment level. The last perspective uses a Bayesian belief network method, and was selected in order to assess the alignment level based on members' past behaviour. An example application is presented where the details of each method are discussed.
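For the second perspective, here is a toy Mamdani-style fuzzy inference sketch, with membership functions and rules invented for illustration, that maps compatibility and incompatibility scores to an alignment level:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def alignment_level(compat, incompat):
    """Two inputs in [0, 1]; min for rule firing, max aggregation,
    centroid defuzzification over three output sets."""
    z = np.linspace(0, 1, 101)
    low, med, high = tri(z, -0.4, 0.0, 0.4), tri(z, 0.1, 0.5, 0.9), tri(z, 0.6, 1.0, 1.4)
    r1 = min(tri(compat, 0.5, 1.0, 1.5), tri(incompat, -0.5, 0.0, 0.5))  # high compat AND low incompat -> high
    r2 = tri(compat, 0.2, 0.5, 0.8)                                      # medium compat -> medium
    r3 = tri(incompat, 0.5, 1.0, 1.5)                                    # high incompat -> low
    agg = np.maximum.reduce([np.minimum(r1, high), np.minimum(r2, med), np.minimum(r3, low)])
    return float(np.sum(agg * z) / (np.sum(agg) + 1e-9))

print(f"alignment ≈ {alignment_level(0.8, 0.1):.2f}")
```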
Abstract:
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information, but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations, typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes the key methods used in the program and outlines its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC.
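DIY ABC's own algorithms are considerably richer, but the core ABC rejection step it builds on can be sketched in a few lines; a toy Gaussian simulator stands in here for a coalescent scenario:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(theta, n=100):
    """Stand-in simulator; a population-genetics scenario would go here."""
    return rng.normal(theta, 1.0, n)

obs = simulate(2.0)                        # pretend this is the observed dataset
s_obs = np.array([obs.mean(), obs.std()])

# ABC rejection: draw from the prior, simulate, keep draws whose summary
# statistics fall within tolerance eps of the observed summaries
accepted = []
for _ in range(50_000):
    theta = rng.uniform(-5, 5)             # prior
    s = np.array([(x := simulate(theta)).mean(), x.std()])
    if np.linalg.norm(s - s_obs) < 0.2:    # eps
        accepted.append(theta)

print(f"posterior mean ≈ {np.mean(accepted):.2f} from {len(accepted)} draws")
```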
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
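A minimal sketch of the semi-automatic construction described above: regress the parameter on features of pilot simulations, then use the fitted posterior-mean estimate as the summary statistic. The simulator and feature set are toy assumptions; the paper's regression stage is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    return rng.normal(theta, 1.0, n)       # toy model; any simulator works

# Pilot stage: simulate (theta, data) pairs and regress theta on data features
thetas = rng.uniform(-5, 5, 5000)
feats = np.array([[x.mean(), np.median(x), x.std()] for x in map(simulate, thetas)])
A = np.c_[np.ones(len(feats)), feats]
coef, *_ = np.linalg.lstsq(A, thetas, rcond=None)   # estimate of E[theta | data]

def summary(x):
    """Semi-automatic summary: fitted posterior-mean estimate of theta,
    to be plugged into a standard ABC comparison of summaries."""
    return coef @ np.r_[1.0, x.mean(), np.median(x), x.std()]

obs = simulate(1.5)
print(f"summary statistic for observed data: {summary(obs):.2f}")
```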
Abstract:
We consider the forecasting of macroeconomic variables that are subject to revisions, using Bayesian vintage-based vector autoregressions. The prior incorporates the belief that, after the first few data releases, subsequent ones are likely to consist of revisions that are largely unpredictable. The Bayesian approach allows the joint modelling of the data revisions of more than one variable, while keeping the concomitant increase in parameter estimation uncertainty manageable. Our model provides markedly more accurate forecasts of post-revision values of inflation than do other models in the literature.
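A stylized sketch of the vintage idea, not the paper's model: stack successive releases of a variable into a vector and fit a VAR(1) with a ridge-type conjugate prior whose mean encodes the belief that later revisions are unpredictable. All data and the specific prior mean are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic vintages: for each quarter t, three successive releases of the
# same variable, with measurement noise shrinking as the data mature
T = 200
truth = rng.normal(2.0, 0.5, T)
Y = np.column_stack([truth + rng.normal(0, s, T) for s in (0.30, 0.15, 0.05)])

# VAR(1) on the release vector, Z = X @ B + E, with a ridge-type prior whose
# mean says each upcoming release is centred on this period's most mature one
X, Z = Y[:-1], Y[1:]
B0 = np.zeros((3, 3))
B0[2, :] = 1.0                     # prior mean loads only on the third release
lam = 10.0                         # prior tightness
B = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ Z + lam * B0)
print("posterior-mean VAR(1) coefficient matrix:\n", np.round(B, 2))
```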
Abstract:
Phylogenetic analyses of chloroplast DNA sequences, morphology, and combined data have provided consistent support for many of the major branches within the angiosperm clade Dipsacales. Here we use sequences from three mitochondrial loci to test the existing broad-scale phylogeny and to attempt to resolve several relationships that have remained uncertain. Parsimony, maximum likelihood, and Bayesian analyses of a combined mitochondrial data set recover trees broadly consistent with previous studies, although resolution and support are lower than in the largest chloroplast analyses. Combining chloroplast and mitochondrial data results in a generally well-resolved and very strongly supported topology, but the previously recognized problem areas remain. To investigate why these relationships have been difficult to resolve, we conducted a series of experiments using different data partitions and heterogeneous substitution models. More complex modeling schemes are usually favored regardless of the partitions recognized, but model choice had little effect on topology or support values. In contrast, there are consistent but weakly supported differences in the topologies recovered from coding and non-coding matrices. These conflicts directly correspond to relationships that were poorly resolved in analyses of the full combined chloroplast-mitochondrial data set. We suggest incongruent signal has contributed to our inability to confidently resolve these problem areas.
Abstract:
Genomewide marker information can improve the reliability of breeding value predictions for young selection candidates in genomic selection. However, the cost of genotyping limits its use to elite animals, and how such selective genotyping affects the predictive ability of genomic selection models is an open question. We performed a simulation study to evaluate the quality of breeding value predictions for selection candidates based on different selective genotyping strategies in a population undergoing selection. The genome consisted of 10 chromosomes of 100 cM each. After 5,000 generations of random mating with a population size of 100 (50 males and 50 females), generation G(0) (reference population) was produced via a full factorial mating between the 50 males and 50 females of generation 5,000. Different levels of selection intensity (animals with the largest yield deviation values) in G(0), or random sampling (no selection), were used to produce the offspring of G(0) (generation G(1)). Five genotyping strategies were used to choose 500 animals in G(0) to be genotyped: 1) Random: randomly selected animals; 2) Top: animals with the largest yield deviation values; 3) Bottom: animals with the lowest yield deviation values; 4) Extreme: animals with the 250 largest and the 250 lowest yield deviation values; and 5) Less Related: the least genetically related animals. The number of individuals in G(0) and G(1) was fixed at 2,500 each, and different levels of heritability were considered (0.10, 0.25, and 0.50). Additionally, all 5 selective genotyping strategies (Random, Top, Bottom, Extreme, and Less Related) were applied to an indicator trait in generation G(0), and the results were evaluated for the target trait in generation G(1), with the genetic correlation between the 2 traits set to 0.50. The 5 genotyping strategies applied to individuals in G(0) (reference population) were compared in terms of their ability to predict the genetic values of the animals in G(1) (selection candidates). The lowest correlations between genomic estimated breeding values (GEBV) and true breeding values (TBV) were obtained with the Bottom strategy. For the Random, Extreme, and Less Related strategies, the correlation between GEBV and TBV became slightly larger as selection intensity decreased and was largest when no selection occurred. These 3 strategies were better than the Top approach. In addition, the Extreme, Random, and Less Related strategies had smaller predictive mean squared errors (PMSE), followed by the Top and Bottom methods. Overall, the Extreme genotyping strategy led to the best predictive ability of breeding values, indicating that animals with extreme yield deviation values in a reference population are the most informative when training genomic selection models.
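The study's full simulation is summarized above; the toy sketch below reproduces the flavor of one comparison, Random versus Extreme genotyping, using SNP-BLUP-style ridge prediction within a single generation for brevity (the paper predicts offspring from a reference generation, with a more detailed genome model):

```python
import numpy as np

rng = np.random.default_rng(11)

n, m, n_geno = 2500, 1000, 500
M = rng.binomial(2, 0.5, size=(n, m)).astype(float)   # SNP genotypes (0/1/2)
u = rng.normal(0, 0.05, m)                            # marker effects
tbv = M @ u                                           # true breeding values
y = tbv + rng.normal(0, np.std(tbv) * np.sqrt(3), n)  # phenotypes, h2 = 0.25

def ridge_gebv(train, lam=500.0):
    """Ridge (SNP-BLUP-style) marker solution from the genotyped subset."""
    Xt, yt = M[train], y[train] - y[train].mean()
    beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(m), Xt.T @ yt)
    return M @ beta

strategies = {
    "Random":  rng.choice(n, n_geno, replace=False),
    "Extreme": np.argsort(y)[np.r_[:n_geno // 2, -n_geno // 2:0]],
}
for name, train in strategies.items():
    r = np.corrcoef(ridge_gebv(train), tbv)[0, 1]
    print(f"{name}: cor(GEBV, TBV) = {r:.2f}")
```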