997 results for Bayesian reasoning


Relevance: 20.00%

Abstract:

The Source Monitoring Framework is a promising model of constructive memory, yet it fails because it is connectionist and does not allow content tagging. The Dual-Process Signal Detection Model is an improvement because it reduces mnemic qualia to a single memory signal (or degree of belief), but it still commits itself to non-discrete representation. If ‘tagging’ is taken to mean the inductively informed assignment of propositional attitudes to aggregates of mnemic characteristics, then a discrete model becomes plausible. A Bayesian model of source monitoring accounts for the continuous variation of inputs and the assignment of prior probabilities to memory content. A modified version of the High-Threshold Dual-Process model is recommended to further source monitoring research.
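
As a minimal illustration of the kind of Bayesian source attribution described here, the sketch below applies Bayes' rule to a continuous memory signal and a prior over two sources; the Gaussian signal distributions, their parameters, and the function name source_posterior are hypothetical choices, not the model proposed in the abstract.

# Toy Bayesian source attribution: continuous memory signal, discrete source decision.
# All distributions and parameters are hypothetical illustrations.
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def source_posterior(signal, prior_internal=0.3):
    """Posterior probability that a memory with the given familiarity
    signal was internally generated (imagined) rather than perceived."""
    like_internal = normal_pdf(signal, mu=0.4, sigma=0.2)   # imagined memories: weaker signal
    like_external = normal_pdf(signal, mu=0.7, sigma=0.2)   # perceived memories: stronger signal
    prior_external = 1.0 - prior_internal
    evidence = like_internal * prior_internal + like_external * prior_external
    return like_internal * prior_internal / evidence

print(source_posterior(0.45))  # weak signal -> more likely imagined
print(source_posterior(0.80))  # strong signal -> more likely perceived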

Relevance: 20.00%

Abstract:

What type of probability theory best describes the way humans make judgments under uncertainty and decisions under conflict? Although rational models of cognition have become prominent and have achieved much success, they adhere to the laws of classical probability theory despite the fact that human reasoning does not always conform to these laws. For this reason we have seen the recent emergence of models based on an alternative probabilistic framework drawn from quantum theory. These quantum models show promise in addressing cognitive phenomena that have proven recalcitrant to modeling by means of classical probability theory. This review compares and contrasts probabilistic models based on Bayesian or classical versus quantum principles, and highlights the advantages and disadvantages of each approach.
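
One frequently cited point of contrast can be written as a single pair of equations: classical (Bayesian) probability obeys the law of total probability, while quantum probability permits a non-zero interference term, which is how quantum models accommodate findings such as order and conjunction effects. The generic form below is a textbook statement under that assumption, not notation taken from this review.

% Classical law of total probability
\Pr(B) = \Pr(B \mid A)\,\Pr(A) + \Pr(B \mid \neg A)\,\Pr(\neg A)

% Quantum probability: an interference term \delta may be non-zero
\Pr_{Q}(B) = \Pr(B \mid A)\,\Pr(A) + \Pr(B \mid \neg A)\,\Pr(\neg A) + \delta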

Relevance: 20.00%

Abstract:

The total entropy utility function is considered for the dual purpose of Bayesian design for model discrimination and parameter estimation. A sequential design setting is proposed in which it is shown how to efficiently estimate the total entropy utility for a wide variety of data types. Utility estimation relies on forming particle approximations to a number of intractable integrals, which is made possible by using a sequential Monte Carlo algorithm for Bayesian inference. A number of motivating examples are considered to demonstrate the performance of total entropy in comparison with utilities for model discrimination and parameter estimation alone. The results suggest that the total entropy utility selects designs that are efficient under both experimental goals, with little compromise in achieving either goal. As such, the total entropy utility is advocated as a general utility for Bayesian design in the presence of model uncertainty.
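
A rough sketch of how a dual-purpose utility of this kind can be estimated by nested Monte Carlo, for a toy design problem (choosing the observation window d of a count experiment with two hypothetical candidate models). The estimator targets the expected information gain on the joint model-and-parameter space, which is the quantity a total entropy utility rewards, but it is not the SMC-based estimator developed in the paper; all model choices, priors, and sample sizes are illustrative.

# Nested Monte Carlo sketch of a total-entropy-style design utility (illustrative only).
# Two hypothetical models for a count y observed over a window of length d:
#   m = 0: y ~ Poisson(theta * d)   (rate grows with d)
#   m = 1: y ~ Poisson(theta)       (rate unaffected by d)
# with theta ~ Gamma(2, 1) under both models and equal prior model probabilities.
# Utility(d) ~ E[ log p(y | m, theta, d) - log p(y | d) ], the expected information
# gain on the joint (model, parameter) space.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def log_poisson_pmf(y, lam):
    return y * np.log(lam) - lam - gammaln(y + 1)

def sample_prior(n):
    return rng.integers(0, 2, size=n), rng.gamma(shape=2.0, scale=1.0, size=n)

def rate(m, theta, d):
    return np.where(m == 0, theta * d, theta)

def utility(d, n_outer=1000, n_inner=1000):
    m_out, th_out = sample_prior(n_outer)
    y = rng.poisson(rate(m_out, th_out, d))
    m_in, th_in = sample_prior(n_inner)          # reused for the inner evidence estimate
    gains = np.empty(n_outer)
    for i in range(n_outer):
        log_lik = log_poisson_pmf(y[i], rate(m_out[i], th_out[i], d))
        log_evidence = np.log(np.mean(np.exp(log_poisson_pmf(y[i], rate(m_in, th_in, d)))))
        gains[i] = log_lik - log_evidence
    return gains.mean()

for d in (0.5, 1.0, 2.0, 5.0):
    print("d =", d, "estimated utility:", round(utility(d), 3))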

Relevance: 20.00%

Abstract:

In this paper it is demonstrated how the Bayesian parametric bootstrap can be adapted to models with intractable likelihoods. The approach is most appealing when semi-automatic approximate Bayesian computation (ABC) summary statistics are selected. After a pilot run of ABC, the likelihood-free parametric bootstrap approach requires very few model simulations to produce an approximate posterior, which can be a useful approximation in its own right. An alternative is to use this approximation as a proposal distribution in ABC algorithms to make them more efficient. In this paper, the parametric bootstrap approximation is used to form the initial importance distribution for the sequential Monte Carlo and the ABC importance and rejection sampling algorithms. The new approach is illustrated through a simulation study of the univariate g-and-k quantile distribution, and is used to infer parameter values of a stochastic model describing expanding melanoma cell colonies.
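
The sketch below illustrates the general pattern of reusing a cheap pilot approximation as the importance distribution for a subsequent, tighter ABC pass. The toy model (normal location and scale, with the sample mean and standard deviation as summaries), the tolerances, and the Gaussian fitted to pilot draws are all hypothetical stand-ins; the paper's parametric bootstrap approximation and semi-automatic summaries are not reproduced here.

# Toy likelihood-free inference: a pilot ABC rejection run, then a Gaussian fitted to the
# pilot draws serves as the importance (proposal) distribution for a tighter ABC pass.
# Model, summaries, priors, and tolerances are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)

y_obs = rng.normal(loc=3.0, scale=2.0, size=100)
s_obs = np.array([y_obs.mean(), y_obs.std()])

def simulate_summaries(mu, sigma):
    y = rng.normal(mu, sigma, size=100)
    return np.array([y.mean(), y.std()])

def in_prior_support(mu, sigma):
    return -10.0 <= mu <= 10.0 and 0.1 <= sigma <= 10.0   # flat priors

# Stage 1: pilot ABC rejection with a loose tolerance.
pilot = []
while len(pilot) < 300:
    mu, sigma = rng.uniform(-10, 10), rng.uniform(0.1, 10)
    if np.linalg.norm(simulate_summaries(mu, sigma) - s_obs) < 1.0:
        pilot.append((mu, sigma))
pilot = np.array(pilot)

# Stage 2: ABC importance sampling from a Gaussian fitted to the pilot draws.
centre, cov = pilot.mean(axis=0), np.cov(pilot.T) * 2.0    # inflate the proposal a little
draws, weights = [], []
while len(draws) < 300:
    theta = rng.multivariate_normal(centre, cov)
    mu, sigma = theta
    if not in_prior_support(mu, sigma):
        continue
    if np.linalg.norm(simulate_summaries(mu, sigma) - s_obs) < 0.3:
        diff = theta - centre
        log_q = -0.5 * diff @ np.linalg.solve(cov, diff)   # proposal log-density up to a constant
        draws.append(theta)
        weights.append(np.exp(-log_q))                     # flat prior: weight proportional to 1 / proposal density
weights = np.array(weights) / np.sum(weights)
print("approximate posterior mean (mu, sigma):", np.array(draws).T @ weights)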

Relevance: 20.00%

Abstract:

The inverse temperature hyperparameter of the hidden Potts model governs the strength of spatial cohesion and therefore has a substantial influence over the resulting model fit. The difficulty arises from the dependence of an intractable normalising constant on the value of the inverse temperature; thus, there is no closed-form solution for sampling from the distribution directly. We review three computational approaches for addressing this issue, namely pseudolikelihood, path sampling, and the approximate exchange algorithm. We compare the accuracy and scalability of these methods using a simulation study.
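
To make the pseudolikelihood option concrete, the sketch below exploits the fact that each pixel's full conditional under a q-state Potts model depends on its neighbours only through like-labelled neighbour counts, so the product of these conditionals can be maximised over the inverse temperature by a simple grid search. The lattice, number of labels, and grid are illustrative; the paper's path sampling and approximate exchange alternatives are not shown.

# Pseudolikelihood estimate of the Potts inverse temperature beta (illustrative).
# p(z_i = k | neighbours) is proportional to exp(beta * n_i(k)), where n_i(k) is the
# number of the pixel's 4 nearest neighbours currently carrying label k.
import numpy as np

def neighbour_counts(z, q):
    """counts[k, i, j] = number of 4-neighbours of pixel (i, j) with label k."""
    counts = np.zeros((q,) + z.shape)
    for k in range(q):
        same = (z == k).astype(float)
        counts[k, 1:, :] += same[:-1, :]
        counts[k, :-1, :] += same[1:, :]
        counts[k, :, 1:] += same[:, :-1]
        counts[k, :, :-1] += same[:, 1:]
    return counts

def log_pseudolikelihood(z, beta, q):
    counts = neighbour_counts(z, q)
    own = np.take_along_axis(counts, z[None, :, :], axis=0)[0]   # n_i(z_i)
    log_norm = np.log(np.exp(beta * counts).sum(axis=0))         # normalise over the q labels
    return (beta * own - log_norm).sum()

def estimate_beta(z, q, grid=np.linspace(0.0, 2.0, 81)):
    scores = [log_pseudolikelihood(z, b, q) for b in grid]
    return grid[int(np.argmax(scores))]

# Quick check on a random labelling (not a proper Potts simulation): estimate near 0 expected.
rng = np.random.default_rng(2)
z = rng.integers(0, 3, size=(40, 40))
print(estimate_beta(z, q=3))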

Relevance: 20.00%

Abstract:

This study examined the effect of an educational intervention utilizing principles of cognitive apprenticeship on students’ ability to apply clinical reasoning skills within the context of a purpose-built clinical vignette. A quasi-experimental, non-equivalent control-group design was used to evaluate the effect of the educational intervention on students’ accuracy, inaccuracy and self-confidence in clinical reasoning. This study makes an important contribution to nursing education by providing evidence to understand how best to facilitate nursing students’ development of clinical reasoning.

Relevance: 20.00%

Abstract:

To further investigate susceptibility loci identified by genome-wide association studies, we genotyped 5,500 SNPs across 14 associated regions in 8,000 samples from a control group and 3 diseases: type 2 diabetes (T2D), coronary artery disease (CAD) and Graves' disease. We defined, using Bayes theorem, credible sets of SNPs that were 95% likely, based on posterior probability, to contain the causal disease-associated SNPs. In 3 of the 14 regions, TCF7L2 (T2D), CTLA4 (Graves' disease) and CDKN2A-CDKN2B (T2D), much of the posterior probability rested on a single SNP, and, in 4 other regions (CDKN2A-CDKN2B (CAD) and CDKAL1, FTO and HHEX (T2D)), the 95% sets were small, thereby excluding most SNPs as potentially causal. Very few SNPs in our credible sets had annotated functions, illustrating the limitations in understanding the mechanisms underlying susceptibility to common diseases. Our results also show the value of more detailed mapping to target sequences for functional studies.
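
The construction of a 95% credible set from per-SNP posterior support is simple enough to sketch: normalise the evidence across the SNPs in a region (here from invented Bayes factors, under the common assumption of a single causal variant and equal prior odds per SNP), sort in decreasing order, and keep the smallest set whose cumulative posterior probability reaches 0.95.

# Build a 95% credible set of SNPs from per-SNP Bayes factors (illustrative values),
# assuming exactly one causal variant in the region and equal prior odds per SNP.
def credible_set(bayes_factors, level=0.95):
    total = sum(bayes_factors.values())
    posterior = {snp: bf / total for snp, bf in bayes_factors.items()}
    ranked = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)
    chosen, cumulative = [], 0.0
    for snp, prob in ranked:
        chosen.append((snp, prob))
        cumulative += prob
        if cumulative >= level:
            break
    return chosen

region = {"rs0001": 480.0, "rs0002": 35.0, "rs0003": 20.0, "rs0004": 3.0, "rs0005": 1.5}
for snp, prob in credible_set(region):
    print(snp, round(prob, 3))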

Relevance: 20.00%

Abstract:

In [8] the authors developed a logical system based on the definition of a new non-classical connective ⊗ capturing the notion of reparative obligation. The system proved to be appropriate for handling well-known contrary-to-duty paradoxes but no model-theoretic semantics was presented. In this paper we fill the gap and define a suitable possible-world semantics for the system for which we can prove soundness and completeness. The semantics is a preference-based non-normal one extending and generalizing semantics for classical modal logics.

Relevance: 20.00%

Abstract:

Background: Multilevel and spatial models are being increasingly used to obtain substantive information on area-level inequalities in cancer survival. Multilevel models assume independent geographical areas, whereas spatial models explicitly incorporate geographical correlation, often via a conditional autoregressive prior. However, the relative merits of these methods for large population-based studies have not been explored. Using a case-study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival. Methods: Multilevel discrete-time and Bayesian spatial survival models were used to study geographical inequalities in all-cause survival for a population-based colorectal cancer cohort of 22,727 cases aged 20–84 years diagnosed during 1997–2007 from Queensland, Australia. Results: Both approaches were viable on this large dataset and produced similar estimates of the fixed effects. After adding area-level covariates, the between-area variability in survival using multilevel discrete-time models was no longer significant. Spatial inequalities in survival were also markedly reduced after adjusting for aggregated area-level covariates. Only the multilevel approach, however, provided an estimate of the contribution of geographical variation to the total variation in survival between individual patients. Conclusions: With little difference observed between the two approaches in the estimation of fixed effects, multilevel models should be favored if there is a clear hierarchical data structure and measuring the independent impact of individual- and area-level effects on survival differences is of primary interest. Bayesian spatial analyses may be preferred if spatial correlation between areas is important and if the priority is to assess small-area variations in survival and map spatial patterns. Both approaches can be readily fitted to geographically enabled survival data from international settings.
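
For readers unfamiliar with the two specifications, the contrast can be written compactly. The forms below are generic versions of a multilevel discrete-time hazard with an independent area effect and of an intrinsic conditional autoregressive (CAR) prior on that effect; they are illustrative and not the exact parameterisations fitted in the study.

% Multilevel discrete-time hazard for individual i in area j at interval t,
% with independent area effects:
\operatorname{logit} h_{ijt} = \alpha_t + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
\qquad u_j \sim \mathrm{N}(0, \sigma_u^2)

% Bayesian spatial alternative: intrinsic CAR prior shrinking each area effect
% towards the mean of its n_j geographical neighbours:
u_j \mid u_{-j} \sim \mathrm{N}\!\left( \frac{1}{n_j} \sum_{k \sim j} u_k,\ \frac{\sigma_u^2}{n_j} \right)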

Relevance: 20.00%

Abstract:

In this paper the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with further alternatives: one based on a single sample of the Bayesian Net at a particular finite population size, and another that uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single-sample method and the expected-sample-size method may be useful and are simpler to compute. Any of these methods is more useful than none when assessing a Bayesian Net under development or when drawing conclusions from an ‘expert’ system.
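
A small sketch of the preferred simulation method, on a hypothetical two-node network A -> B with invented conditional probability tables: repeatedly simulate a finite population of the stated size, recompute the query from each simulated population's empirical frequencies, and report a percentile interval around the exact query.

# Simulation-based uncertainty interval for a Bayesian Net query (toy two-node net A -> B).
# CPTs and population size are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
p_a = 0.3                       # P(A = 1)
p_b_given_a = {0: 0.2, 1: 0.7}  # P(B = 1 | A)

def simulate_population(n):
    a = rng.random(n) < p_a
    b = rng.random(n) < np.where(a, p_b_given_a[1], p_b_given_a[0])
    return a, b

def query_from_population(a, b):
    """Empirical P(A = 1 | B = 1) from one finite population."""
    if b.sum() == 0:
        return np.nan
    return (a & b).sum() / b.sum()

n_population, n_reps = 200, 5000
estimates = np.array([query_from_population(*simulate_population(n_population))
                      for _ in range(n_reps)])
estimates = estimates[~np.isnan(estimates)]
exact = p_b_given_a[1] * p_a / (p_b_given_a[1] * p_a + p_b_given_a[0] * (1 - p_a))
print("exact query:", round(exact, 3))
print("95% uncertainty interval:", np.percentile(estimates, [2.5, 97.5]).round(3))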

Relevance: 20.00%

Abstract:

Bayesian networks (BNs) are tools for representing expert knowledge or evidence. They are especially useful for synthesising evidence or belief concerning a complex intervention, assessing the sensitivity of outcomes to different situations or contextual frameworks and framing decision problems that involve alternative types of intervention. Bayesian networks are useful extensions to logic maps when initiating a review or to facilitate synthesis and bridge the gap between evidence acquisition and decision-making. Formal elicitation techniques allow development of BNs on the basis of expert opinion. Such applications are useful alternatives to ‘empty’ reviews, which identify knowledge gaps but fail to support decision-making. Where review evidence exists, it can inform the development of a BN. We illustrate the construction of a BN using a motivating example that demonstrates how BNs can ensure coherence, transparently structure the problem addressed by a complex intervention and assess sensitivity to context, all of which are critical components of robust reviews of complex interventions. We suggest that BNs should be utilised to routinely synthesise reviews of complex interventions or empty reviews where decisions must be made despite poor evidence.
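
As a schematic of what an elicited network of this kind might look like, consider a hypothetical three-node chain (intervention -> uptake -> outcome) with made-up conditional probabilities standing in for expert-elicited values; the marginal probability of a good outcome is then obtained by enumeration. This is purely illustrative and not a network from the paper.

# Hypothetical elicited Bayesian network: intervention -> uptake -> outcome.
# All probabilities are invented placeholders for expert-elicited values.
p_uptake_given_intervention = {True: 0.6, False: 0.1}
p_good_outcome_given_uptake = {True: 0.5, False: 0.2}

def p_good_outcome(intervention: bool) -> float:
    """Marginalise over uptake by enumeration."""
    p_uptake = p_uptake_given_intervention[intervention]
    return (p_uptake * p_good_outcome_given_uptake[True]
            + (1 - p_uptake) * p_good_outcome_given_uptake[False])

print("P(good outcome | intervention):", p_good_outcome(True))     # 0.38
print("P(good outcome | no intervention):", p_good_outcome(False))  # 0.23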

Relevance: 20.00%

Abstract:

Gene expression is arguably the most important indicator of biological function. Thus, identifying differentially expressed genes is one of the main aims of high-throughput studies that use microarray and RNAseq platforms to study deregulated cellular pathways. There are many tools for analysing differential gene expression from transcriptomic datasets. The major challenge of this topic is to estimate gene expression variance given the high amount of ‘background noise’ generated by biological equipment and the lack of biological replicates. Bayesian inference has been widely used in the bioinformatics field. In this work, we reveal that the prior knowledge employed in the Bayesian framework also helps to improve the accuracy of differential gene expression analysis when using a small number of replicates. We have developed a differential analysis tool that uses Bayesian estimation of the variance of gene expression for use with small numbers of biological replicates. Our method is more consistent than the widely used Cyber-T tool, which successfully introduced the Bayesian framework to differential analysis. We also provide a user-friendly, web-based graphical user interface for biologists to use with microarray and RNAseq data. Bayesian inference can compensate for the instability of variance estimates caused by small numbers of biological replicates by using pseudo-replicates as prior knowledge. We also show that our new strategy for selecting pseudo-replicates improves the performance of the analysis.
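
The core variance-stabilisation idea can be sketched in a few lines: blend each gene's sample variance with a background ('pseudo-replicate') variance estimated from genes of similar mean expression, weighted by a pseudo degrees-of-freedom parameter, and use the blended variance in the t-statistic. The weighting formula below is the commonly used Cyber-T-style moderated variance; the window size, pseudo degrees of freedom, data, and function name moderated_variances are illustrative, and this is not the authors' tool.

# Moderated (Bayesian-shrinkage) variance for small numbers of replicates, Cyber-T style.
# For each gene: sigma2_post = (nu0 * sigma0^2 + (n - 1) * s^2) / (nu0 + n - 2),
# where sigma0^2 is a background variance from genes of similar mean expression
# ("pseudo replicates") and nu0 is a pseudo degrees-of-freedom weight.
import numpy as np

def moderated_variances(expr, window=50, nu0=10):
    """expr: genes x replicates matrix (log scale). Returns shrunken per-gene variances."""
    n = expr.shape[1]
    means = expr.mean(axis=1)
    s2 = expr.var(axis=1, ddof=1)
    order = np.argsort(means)
    sigma0 = np.empty_like(s2)
    half = window // 2
    for rank, gene in enumerate(order):
        lo, hi = max(0, rank - half), min(len(order), rank + half)
        sigma0[gene] = s2[order[lo:hi]].mean()       # local background variance
    return (nu0 * sigma0 + (n - 1) * s2) / (nu0 + n - 2)

# Toy data: 1000 genes, 3 replicates per condition, 50 genes shifted in group_b.
rng = np.random.default_rng(4)
group_a = rng.normal(5.0, 0.5, size=(1000, 3))
group_b = rng.normal(5.0, 0.5, size=(1000, 3))
group_b[:50] += 1.0
var_a, var_b = moderated_variances(group_a), moderated_variances(group_b)
t = (group_a.mean(axis=1) - group_b.mean(axis=1)) / np.sqrt(var_a / 3 + var_b / 3)
print("top genes by |t|:", np.argsort(-np.abs(t))[:10])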

Relevance: 20.00%

Abstract:

Cancer is the leading contributor to the disease burden in Australia. This thesis develops and applies Bayesian hierarchical models to facilitate an investigation of the spatial and temporal associations for cancer diagnosis and survival among Queenslanders. The key objectives are to document and quantify the importance of spatial inequalities, explore factors influencing these inequalities, and investigate how spatial inequalities change over time. Existing Bayesian hierarchical models are refined, new models and methods developed, and tangible benefits obtained for cancer patients in Queensland. The versatility of using Bayesian models in cancer control is clearly demonstrated through these detailed and comprehensive analyses.

Relevance: 20.00%

Abstract:

Ship seakeeping operability refers to the quantification of motion performance in waves relative to mission requirements. This is used to make decisions about preferred vessel designs, but it can also be used as a comprehensive assessment of the benefits of ship-motion-control systems. Traditionally, operability computation aggregates statistics of motion computed over the envelope of likely environmental conditions in order to determine a coefficient in the range from 0 to 1 called operability. When used for assessment of motion-control systems, the increase of operability is taken as the key performance indicator. The operability coefficient is often given the interpretation of the percentage of time operable. This paper considers an alternative probabilistic approach to this traditional computation of operability. It characterises operability not as a number to which a frequency interpretation is attached, but as a hypothesis that a vessel will attain the desired performance in one mission, considering the envelope of likely operational conditions. This enables the use of Bayesian theory to compute the probability that this hypothesis is true, conditional on data from simulations. Thus, the metric considered is the probability of operability. This formulation not only adheres to recent developments in reliability and risk analysis, but also allows incorporating into the analysis more accurate descriptions of ship-motion-control systems, since the analysis is not limited to linear ship responses in the frequency domain. The paper also discusses an extension of the approach to the case of assessment of increased levels of autonomy for unmanned marine craft.
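
One way to make the probability of operability concrete is a conjugate Beta-Binomial sketch: each simulated mission over a sampled environmental condition either meets the motion criteria or does not, and a Beta posterior over the underlying success probability is updated from those outcomes. The prior, the number of simulations, and the pass/fail generator below are invented; the paper's analysis is driven by full ship-motion-control simulations, not synthetic coin flips.

# Beta-Binomial sketch of "probability of operability": posterior over the probability
# that a vessel meets its motion criteria in one mission, given simulated pass/fail outcomes.
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(5)

def mission_meets_criteria() -> bool:
    """Stand-in for one seakeeping simulation under a sampled environmental condition."""
    return rng.random() < 0.8      # hidden "true" operability used only to fake outcomes

n_sims = 200
successes = sum(mission_meets_criteria() for _ in range(n_sims))

a0, b0 = 1.0, 1.0                  # uniform Beta prior on the operability probability
a, b = a0 + successes, b0 + n_sims - successes

print("posterior predictive P(operable next mission):", round(a / (a + b), 3))
print("P(operability > 0.75):", round(1.0 - beta_dist.cdf(0.75, a, b), 3))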

Relevance: 20.00%

Abstract:

This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and to make a decision about which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predictive probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to the assessment of the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
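
A minimal version of the proposed metric, under the simplifying assumption of a Beta-Binomial model per algorithm: the posterior predictive probability of a correct detection is the posterior mean of the success probability given the test outcomes, and two algorithms can additionally be compared via Monte Carlo draws from their posteriors. The counts, priors, and algorithm names below are hypothetical.

# Posterior predictive probability of correct detection for two hypothetical algorithms,
# each modelled as Beta-Binomial: Beta(1, 1) prior updated with test successes/failures.
import numpy as np

rng = np.random.default_rng(6)

tests = {"algo_A": (178, 200), "algo_B": (169, 200)}   # (correct detections, trials)

posteriors = {}
for name, (correct, trials) in tests.items():
    a, b = 1 + correct, 1 + trials - correct
    posteriors[name] = (a, b)
    print(name, "predictive P(correct detection) =", round(a / (a + b), 3))

# Posterior probability that algo_A truly outperforms algo_B (Monte Carlo on the Betas).
draws_a = rng.beta(*posteriors["algo_A"], size=100_000)
draws_b = rng.beta(*posteriors["algo_B"], size=100_000)
print("P(algo_A better than algo_B):", round((draws_a > draws_b).mean(), 3))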