9 results for Bayesian Model Averaging

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative for estimating WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is applicable only to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample roughly eightfold when the forest is 'poolable'. © 2008 Elsevier Inc. All rights reserved.
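As an illustration of the model-averaging step described above, here is a minimal Python sketch that weights two candidate pooling models by the standard BIC approximation to posterior model probabilities and combines their WTP predictions. The data, covariates, and the two fixed candidate models are invented for illustration; the paper's algorithm instead searches over subsets of the 42 study sites.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: WTP observations from 3 hypothetical sites.
n = 200
site = rng.integers(0, 3, size=n)
x = rng.normal(size=n)                     # an individual-level covariate
wtp = 5.0 + 2.0 * x + 0.5 * site + rng.normal(scale=1.0, size=n)

def fit_bic(X, y):
    """OLS fit plus the BIC of the Gaussian regression model."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    n_obs, k = X.shape
    bic = n_obs * np.log(rss / n_obs) + k * np.log(n_obs)
    return beta, bic

ones = np.ones(n)
models = {
    "pooled benefit function": np.column_stack([ones, x]),
    "site-specific intercepts": np.column_stack(
        [ones, x] + [(site == s).astype(float) for s in (1, 2)]),
}
fits = {name: fit_bic(X, wtp) for name, X in models.items()}

# BIC approximation: posterior model weights proportional to exp(-BIC/2).
bics = np.array([bic for _, bic in fits.values()])
w = np.exp(-(bics - bics.min()) / 2.0)
w /= w.sum()

# BMA prediction of mean WTP at x = 0 for site 0 (the models' intercepts).
preds = np.array([beta[0] for beta, _ in fits.values()])
print(dict(zip(models, np.round(w, 3))), "BMA WTP:", round(float(w @ preds), 2))
```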

Relevance:

90.00%

Publisher:

Abstract:

This paper addresses the estimation of the parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, being affected both by overfitting and by model uncertainty. Two ideas are discussed to overcome these issues: a maximum entropy approach and a Bayesian model averaging approach. Both ideas can easily be applied on top of EM, while the entropy idea can also be implemented in a more sophisticated way, through a dedicated non-linear solver. An extensive set of experiments shows that these ideas produce significantly better estimates and inferences than the traditional and widely used maximum (penalized) log-likelihood and maximum a posteriori estimates. In particular, when EM is adopted as the optimization engine, the model averaging approach performs best; its performance is matched by the entropy approach when implemented using the non-linear solver. The results suggest that these ideas are immediately applicable (they are easy to implement and to integrate into currently available inference engines) and that they constitute a better way to learn Bayesian network parameters.
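A rough Python sketch of the "averaging on top of EM" idea follows. It runs EM from many random restarts on a toy two-component Bernoulli mixture (a Bayesian network with one hidden node) and, instead of keeping only the highest-likelihood run, averages the estimates across runs with weights proportional to each run's likelihood. This is only a crude stand-in for the paper's model averaging procedure, and label switching between the two mixture components is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a hidden binary class Z with two observed binary features.
true_z = rng.random(500) < 0.4
X = np.column_stack([
    rng.random(500) < np.where(true_z, 0.8, 0.2),
    rng.random(500) < np.where(true_z, 0.7, 0.3),
]).astype(float)

def em_run(X, iters=50):
    """One EM run from a random starting point; returns (loglik, pi, theta)."""
    n, d = X.shape
    pi = rng.uniform(0.3, 0.7)                  # P(Z=1)
    theta = rng.uniform(0.2, 0.8, size=(2, d))  # P(X_j=1 | Z)
    for _ in range(iters):
        # E-step: responsibilities for Z=1
        l1 = pi * np.prod(theta[1]**X * (1 - theta[1])**(1 - X), axis=1)
        l0 = (1 - pi) * np.prod(theta[0]**X * (1 - theta[0])**(1 - X), axis=1)
        r = l1 / (l1 + l0)
        # M-step: expected-count parameter updates
        pi = r.mean()
        theta[1] = (r[:, None] * X).sum(0) / r.sum()
        theta[0] = ((1 - r)[:, None] * X).sum(0) / (1 - r).sum()
    return np.log(l1 + l0).sum(), pi, theta

# Average over EM runs, weighted by each run's likelihood (label switching
# is ignored here, so this is illustrative only).
runs = [em_run(X) for _ in range(20)]
logliks = np.array([run[0] for run in runs])
w = np.exp(logliks - logliks.max())
w /= w.sum()
pi_avg = sum(wi * run[1] for wi, run in zip(w, runs))
print("likelihood-weighted average of P(Z=1):", round(float(pi_avg), 3))
```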

Relevance:

40.00%

Publisher:

Abstract:

Recently, Bayesian statistical software has been developed for age-depth modeling (wiggle-match dating) of sequences of densely spaced radiocarbon dates from peat cores. The method is described in non-statistical terms, and is compared with an alternative method of chronological ordering of 14C dates. Case studies include the dating of the start of agriculture in the northeastern part of the Netherlands, and of a possible Hekla-3 tephra layer in the same country. We discuss future enhancements in Bayesian age modeling.
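To make the wiggle-matching idea concrete, below is a minimal Python sketch: assuming a constant deposition rate, it slides a dated sequence along a calibration curve and keeps the (surface age, accumulation rate) pair with the smallest chi-squared misfit. The calibration curve, depths, and 14C ages are all invented stand-ins, and the real method is Bayesian rather than this simple grid search.

```python
import numpy as np

def cal_curve(cal_age):
    """Hypothetical calibration curve: calendar age -> expected 14C age.
    A smooth trend with an artificial 'wiggle', for illustration only."""
    return cal_age + 40.0 * np.sin(cal_age / 80.0)

depths = np.array([10, 20, 30, 40, 50.0])             # cm below surface
c14_age = np.array([2120, 2210, 2260, 2380, 2440.0])  # measured 14C yr BP
c14_err = np.full_like(c14_age, 30.0)                 # 1-sigma errors

# Grid search over surface age and deposition time (yr/cm): the wiggle-match
# picks the alignment of the whole sequence that best fits the curve.
best = None
for age0 in np.arange(1800, 2400, 5.0):
    for rate in np.arange(5, 20, 0.5):
        model = cal_curve(age0 + rate * depths)
        chi2 = np.sum(((c14_age - model) / c14_err) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, age0, rate)

chi2, age0, rate = best
print(f"best fit: surface ~{age0:.0f} cal BP, {rate:.1f} yr/cm (chi2={chi2:.1f})")
```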

Relevance:

30.00%

Publisher:

Abstract:

The paper introduces a new modeling approach that represents the waiting times in an Accident and Emergency (A&E) Department in a UK-based National Health Service (NHS) hospital. The technique uses Bayesian networks to capture the heterogeneity of arriving patients by representing how patient covariates interact to influence their waiting times in the department. Such waiting times have been reviewed by the NHS as a means of investigating the efficiency of A&E departments (emergency rooms) and how they operate. As a result, activity targets are now established based on patients' total waiting times, with particular emphasis on trolley waits.
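A toy version of the kind of network involved is sketched below in plain Python: a discretized waiting-time node with two hypothetical covariate parents, queried by enumeration. Every variable, state, and probability here is invented; the paper's network and its probabilities are learned from actual A&E data.

```python
# Structure: Triage -> Wait, Arrival -> Wait (parents assumed independent).
P_triage = {"urgent": 0.3, "standard": 0.7}
P_arrival = {"ambulance": 0.4, "walk-in": 0.6}
# Conditional probability table: P(Wait = "over_4h" | Triage, Arrival)
P_long_wait = {
    ("urgent",   "ambulance"): 0.10,
    ("urgent",   "walk-in"):   0.20,
    ("standard", "ambulance"): 0.35,
    ("standard", "walk-in"):   0.50,
}

def p_long_wait(triage=None, arrival=None):
    """P(Wait = over_4h | evidence), by enumeration over unobserved parents."""
    total = 0.0
    for t, pt in P_triage.items():
        if triage is not None and t != triage:
            continue
        for a, pa in P_arrival.items():
            if arrival is not None and a != arrival:
                continue
            wt = 1.0 if triage is not None else pt    # observed parents get weight 1
            wa = 1.0 if arrival is not None else pa
            total += wt * wa * P_long_wait[(t, a)]
    return total

print(p_long_wait())                    # marginal probability of a long wait
print(p_long_wait(triage="urgent"))     # conditioned on one covariate
```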

Relevance:

30.00%

Publisher:

Abstract:

We propose the inverse Gaussian distribution as a less complex alternative to the classical log-normal model for describing turbulence-induced fading in free-space optical (FSO) systems operating under weak turbulence conditions and/or in the presence of aperture-averaging effects. By conducting goodness-of-fit tests, we define the range of values of the scintillation index, for various multiple-input multiple-output (MIMO) FSO configurations, over which the two distributions approximate each other at a given significance level. Furthermore, the bit error rate performance of two typical MIMO FSO systems is investigated over the new turbulence model: an intensity-modulation/direct-detection MIMO FSO system with Q-ary pulse position modulation that employs repetition coding at the transmitter and equal gain combining at the receiver, and a heterodyne MIMO FSO system with differential phase-shift keying and maximal ratio combining at the receiver. Finally, numerical results are presented that validate the theoretical analysis and provide useful insights into the implications of the model parameters for overall system performance. © 2011 IEEE.
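The distribution-comparison step can be reproduced in miniature with SciPy, which ships both distributions: the sketch below draws log-normal irradiance samples at an assumed scintillation index, fits an inverse Gaussian and a log-normal to them, and runs a Kolmogorov-Smirnov goodness-of-fit test on each. The sample size and parameter values are illustrative only, not those of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical weak-turbulence irradiance samples: log-normal with unit mean.
# For log-normal fading, scintillation index SI = exp(sigma^2) - 1.
si = 0.2                                   # assumed scintillation index
sigma2 = np.log(1.0 + si)
samples = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=5000)

# Fit both candidate fading models (location fixed at 0) and test the fits.
ig_params = stats.invgauss.fit(samples, floc=0)
ln_params = stats.lognorm.fit(samples, floc=0)
print("inverse Gaussian:", stats.kstest(samples, "invgauss", args=ig_params))
print("log-normal:      ", stats.kstest(samples, "lognorm", args=ln_params))
```

At small scintillation indices both tests typically fail to reject, which is the sense in which the two distributions approximate each other in the weak-turbulence regime.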

Relevance:

30.00%

Publisher:

Abstract:

Age-depth modeling using Bayesian statistics requires well-informed prior information about the behavior of sediment accumulation. Here we present average sediment accumulation rates (represented as deposition times, DT, in yr/cm) for lakes in an Arctic setting, and we examine the variability across space (intra- and inter-lake) and time (late Holocene). The dataset includes over 100 radiocarbon dates, primarily on bulk sediment, from 22 sediment cores obtained from 18 lakes spanning the boreal-to-tundra ecotone gradient in subarctic Canada. There are four to twenty-five radiocarbon dates per core, depending on the length and character of the sediment records. Deposition times were calculated at 100-year intervals from age-depth models constructed using the ‘classical’ age-depth modeling software Clam. Lakes in boreal settings have the most rapid accumulation (mean DT 20 ± 10 yr/cm), whereas lakes in tundra settings accumulate at moderate (mean DT 70 ± 10 yr/cm) to very slow rates (>100 yr/cm). Many of the age-depth models demonstrate fluctuations in accumulation that coincide with lake evolution and post-glacial climate change. Ten of our sediment cores yielded sediments as old as c. 9,000 cal BP (BP = years before AD 1950). Between c. 9,000 and c. 6,000 cal BP, sediment accumulation was relatively rapid (DT of 20 to 60 yr/cm). Accumulation slowed between c. 5,500 and c. 4,000 cal BP as vegetation expanded northward in response to warming. A short period of rapid accumulation occurred near 1,200 cal BP at three lakes. Our research will help inform priors in Bayesian age modeling.
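The deposition-time calculation lends itself to a short numerical sketch: given a point age-depth model, interpolate depth onto a 100-year age grid and convert each step to yr/cm. The depths and ages below are invented; in the study the age-depth models come from Clam runs on the dated cores.

```python
import numpy as np

# Hypothetical point age-depth model (monotonic ages at dated depths).
depth_cm = np.array([0, 50, 120, 200, 260.0])
age_calBP = np.array([0, 1500, 4000, 7000, 9000.0])

ages = np.arange(0, 9000, 100.0)               # 100-yr age grid
depths = np.interp(ages, age_calBP, depth_cm)  # depth at each grid age
dt = 100.0 / np.diff(depths)                   # DT in yr/cm per 100-yr step

print(np.round(dt[:5], 1))  # deposition times for the most recent intervals
```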

Relevance:

30.00%

Publisher:

Abstract:

Retrospective clinical datasets are often characterized by a relatively small sample size and many missing values. In this case, a common way of handling the missingness consists in discarding patients with missing covariates from the analysis, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing complete-data methods to be applied later on. Moreover, methodologies for data imputation might depend on the particular purpose and might achieve better results by exploiting specific characteristics of the domain. The problem of missing-data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules on other variables that yield results similar to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation–maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
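As a loose sketch of model-based imputation feeding a downstream analysis, the code below substitutes a fitted bivariate Gaussian for the paper's Bayesian network (which is learned by structural EM) and imputes each missing covariate by its conditional mean given the observed one. All data are simulated and the swap of model class is deliberate, to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two correlated covariates; the second is missing for ~20% of patients.
n = 300
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
miss = rng.random(n) < 0.2
X_obs = X.copy()
X_obs[miss, 1] = np.nan

# Fit a joint model (here a Gaussian, standing in for the Bayesian network)
# to the complete cases.
complete = X_obs[~np.isnan(X_obs[:, 1])]
mu = complete.mean(axis=0)
S = np.cov(complete, rowvar=False)

def impute_x2(x1):
    """Conditional mean of x2 given x1 under the fitted bivariate Gaussian."""
    return mu[1] + S[0, 1] / S[0, 0] * (x1 - mu[0])

X_imp = X_obs.copy()
X_imp[miss, 1] = impute_x2(X_obs[miss, 0])
print("imputation RMSE:", np.sqrt(np.mean((X_imp[miss, 1] - X[miss, 1]) ** 2)))
# X_imp would then be passed to the survival tree instead of using surrogate splits.
```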