648 results for bayesian networks
Abstract:
A flexible and simple Bayesian decision-theoretic design for dose-finding trials is proposed in this paper. To reduce the computational burden, we adopt a working model with conjugate priors, which is flexible enough to fit all monotonic dose-toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interest of the trial. Patients are allocated based not only on the utility function but also on the chosen dose selection rule. The most popular dose selection rule is the one-step-look-ahead (OSLA), which selects the best-so-far dose. A more complicated rule, such as the two-step-look-ahead, is theoretically more efficient than the OSLA only when the required distributional assumptions are met, which is often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that OSLA was often more efficient than the two-step-look-ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method outperforms several popular Bayesian methods and that the negative impact of prior misspecification can be managed in the design stage.
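To make the OSLA rule concrete, here is a minimal Python sketch under a conjugate Beta-Binomial working model; the prior, observed counts, utility function and target toxicity rate are illustrative assumptions, not the paper's specification:

import numpy as np

# Minimal sketch of the one-step-look-ahead (OSLA) rule under a
# conjugate Beta-Binomial working model. All numbers below are
# illustrative assumptions, not the paper's design.
rng = np.random.default_rng(1)

n_doses = 5
prior_a = np.ones(n_doses)        # Beta(1, 1) prior at each dose (assumption)
prior_b = np.ones(n_doses)
tox = np.array([0, 1, 2, 4, 5])   # hypothetical toxicities observed so far
n = np.array([3, 3, 6, 6, 6])     # hypothetical patients treated so far
target = 0.30                     # illustrative target toxicity rate

def expected_utility(d, n_samples=5000):
    # Analytic Beta posterior at dose d, thanks to conjugacy
    a = prior_a[d] + tox[d]
    b = prior_b[d] + n[d] - tox[d]
    p = rng.beta(a, b, n_samples)
    return np.mean(-np.abs(p - target))   # utility: closeness to the target

# OSLA: allocate the next cohort to the current best-looking dose
next_dose = max(range(n_doses), key=expected_utility)
print("next dose level:", next_dose)

Because the posterior at each dose is an analytic Beta distribution, each allocation step needs only cheap posterior draws, which is the computational saving the abstract highlights.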
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework, in which a trial is designed so that the overall Type I and Type II errors are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed to stop when the posterior probability of the treatment effect falls within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates, and we introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarities to posterior probabilities, and we show that our method can control these Bayesian-type errors as well. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs for error rates. A clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
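A minimal sketch of how a posterior-probability stopping rule can be evaluated for frequentist error control, in the spirit of a Bayesian two-stage design; the sample sizes, thresholds and prior are hypothetical choices, not the article's:

import numpy as np
from scipy import stats

# Sketch: exact frequentist error rates of a Bayesian two-stage rule.
# Stage 1 treats n1 patients; the trial continues only if the posterior
# probability that the response rate exceeds p0 is above a threshold.
# All numbers (n1, n2, p0, p1, thresholds, prior) are illustrative.
n1, n2 = 15, 25
p0, p1 = 0.20, 0.40          # null and alternative response rates
a0, b0 = 1.0, 1.0            # Beta(1, 1) prior
theta1, theta2 = 0.10, 0.90  # continuation / final-success thresholds

def post_prob_exceeds(x, n):
    # P(p > p0 | x responses in n patients) under the Beta posterior
    return 1.0 - stats.beta.cdf(p0, a0 + x, b0 + n - x)

def reject_prob(p):
    # Probability the design declares the treatment promising when the
    # true response rate is p (exact sum over both stages' outcomes)
    total = 0.0
    for x1 in range(n1 + 1):
        if post_prob_exceeds(x1, n1) <= theta1:
            continue  # stopped for futility at stage 1
        for x2 in range(n2 + 1):
            if post_prob_exceeds(x1 + x2, n1 + n2) >= theta2:
                total += stats.binom.pmf(x1, n1, p) * stats.binom.pmf(x2, n2, p)
    return total

print("Type I error:", reject_prob(p0))   # should be small
print("Power       :", reject_prob(p1))   # 1 - Type II error

Because the error rates are exact binomial sums, the thresholds (theta1, theta2) can be searched over at the design stage until both frequentist errors fall below the desired levels.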
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain instead; the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
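A schematic way to see the rate-of-gain argument (the notation here is ours, not the paper's): by the renewal-reward theorem, if a phase II trial takes expected time \(t_2\) and a proportion \(\pi\) of evaluated treatments is forwarded to a phase III study of duration \(t_3\), then the long-run rate of gain is

\[ \text{rate of gain} = \frac{E[\text{gain per phase II trial}]}{t_2 + \pi\, t_3}, \]

so a design that forwards fewer ineffective treatments (a smaller \(\pi\) spent on false positives) can achieve a substantially higher rate even when its per-trial expected gain is comparable.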
Abstract:
Disease mapping involves the description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; for ecological analyses incorporating area- and/or individual-level covariates; or for cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
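As a concrete starting point for areal disease mapping, here is a short sketch of the standardized morbidity ratio (SMR) that choropleth maps typically display; the counts and populations below are made up:

import numpy as np

# Sketch: the usual first step in areal disease mapping is the
# standardized morbidity ratio SMR_i = y_i / E_i, where y_i is the
# observed count in area i and E_i the expected count given the area's
# population and the overall rate. Data below are invented.
y = np.array([12, 3, 30, 7, 18])          # observed disease counts per area
pop = np.array([5000, 2000, 9000, 4000, 6000])

overall_rate = y.sum() / pop.sum()        # internal standardization
expected = overall_rate * pop
smr = y / expected                        # values to shade in a choropleth
print(np.round(smr, 2))

# Raw SMRs are unstable in small areas, which is what motivates the
# Bayesian smoothing models discussed in this literature.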
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label-switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
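The overfitting idea can be illustrated with a toy Gibbs sampler (a simplified stand-in, not the Zmix implementation): deliberately fit more components than needed under a sparse Dirichlet prior so superfluous components are pushed toward zero weight, then read off the posterior number of non-empty components:

import numpy as np

rng = np.random.default_rng(0)

# Data from a 2-component mixture; we overfit with K = 6 components,
# unit component variances, and a sparse Dirichlet(0.01) prior.
x = np.concatenate([rng.normal(-3, 1, 150), rng.normal(3, 1, 150)])
n, K, alpha = len(x), 6, 0.01
mu = rng.normal(0, 3, K)
w = np.full(K, 1.0 / K)

counts_nonempty = []
for it in range(1500):
    # 1. sample allocations given weights and means
    logp = np.log(w + 1e-300) - 0.5 * (x[:, None] - mu[None, :]) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pi) for pi in p])
    nk = np.bincount(z, minlength=K)
    # 2. sample weights from the sparse Dirichlet posterior
    w = rng.dirichlet(alpha + nk)
    # 3. sample means given allocations (N(0, 10^2) prior)
    for k in range(K):
        var = 1.0 / (nk[k] + 0.01)
        mu[k] = rng.normal(var * x[z == k].sum(), np.sqrt(var))
    if it >= 500:
        counts_nonempty.append(int((nk > 0).sum()))

print("posterior mode of #non-empty components:",
      np.bincount(counts_nonempty).argmax())   # typically 2 here

This toy version omits the prior parallel tempering that Zmix uses to fix the mixing problems of such samplers, and the Zswitch relabelling step.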
Abstract:
Interdependence is a central concept in systems and organizations, yet our methods for measuring it are not well developed. Here, we report on a novel method for transforming digital trace data into networks of events that can be used to visualize and measure interdependence. The edges in the network represent sequential flow and the vertices represent actors, actions and artifacts. We refer to this representation as an affordance network. As with conventional approaches such as process mining, our method uses input from a stream of time-stamped occurrences, but the representation is simpler and more appropriate for exploration and theory building. As digital trace data becomes more widely available, this method may become more useful in information systems research and practice. Like a thermometer, it helps us measure a basic property of a system that would otherwise be difficult to see.
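A minimal sketch of the construction (the event format and example data are invented for illustration): sort the time-stamped occurrences and link each one to its successor, so that vertices are actors, actions and artifacts and weighted edges record sequential flow:

from collections import Counter

# Sketch: build a sequential-flow network from time-stamped trace data.
# Each occurrence is (timestamp, entity); entities may be actors,
# actions or artifacts, and all become vertices.
trace = [
    (1, "alice"), (2, "edit"),   (3, "doc_A"),
    (4, "bob"),   (5, "edit"),   (6, "doc_A"),
    (7, "alice"), (8, "review"), (9, "doc_B"),
]

trace.sort(key=lambda e: e[0])                          # order by timestamp
edges = Counter((a[1], b[1]) for a, b in zip(trace, trace[1:]))

for (src, dst), weight in edges.most_common():
    print(f"{src} -> {dst}  (weight {weight})")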
Abstract:
For many complex natural resources problems, planning and management efforts involve groups of organizations working collaboratively through networks (Agranoff, 2007; Booher & Innes, 2010). These networks sometimes involve formal roles and relationships, but often include informal elements (Edelenbos & Klijn, 2007). All of these roles and relationships undergo change in response to changes in personnel, priorities and policy. There has been considerable focus in the planning and public policy literature on describing and characterizing these networks (Mandell & Keast, 2008; Provan & Kenis, 2007). However, there has been far less research assessing how networks change and adjust in response to policy and political change. In the Australian state of Queensland, Natural Resource Management (NRM) organizations were created as lead organizations to address land and water management issues on a regional basis with Commonwealth funding and state support. In 2012, a change in state government signaled a dramatic change in policy that resulted in a significant reduction of state support and commitment. In response, NRM organizations have had to adapt their networks and relationships. In this study, we examine the issues of network relationships, capacity and changing relationships over time using written surveys and focus groups with NRM CEOs, managers and planners (note: data collection events scheduled for March and April 2015). The research team will meet with each of these three groups separately, conducting an in-person survey followed by a facilitated focus group discussion. The NRM participant focus groups will also be subdivided by region, which correlates with capacity (inland/low capacity; coastal/high capacity). The findings focus on how changes in state government commitment have affected NRM networks and their relationships with state agencies. We also examine how these changes vary according to the level within the organization and the capacity of the organization. We hypothesize that: (1) NRM organizations have struggled to maintain capacity in the wake of state agency withdrawal of support; (2) NRM organizations with the lowest capacity have been most adversely affected, while some high-capacity NRM organizations may have become more resilient as they have sought out other partners; (3) network relationships at the highest levels of the organization have been affected the most by state policy change; and (4) NRM relationships at the lowest levels of the organizations have changed the least, as formal relationships are replaced by informal networks and relationships.
Abstract:
This thesis is a study of the Chinese One-Child Generation's digital and social sharing. It examines urban youth grassroots communities, including an urban farmers' community and volunteers in educational camps. These case studies explain the emergence of 'sharism' as a reaction to growing risks in China, such as food safety problems and environmental degradation emanating from China's rapid economic development, and to growing urbanism, globalisation, and consumerism. The new forms of 'sharism' are linked to guanxi (social relations) and to connected youth communities, made possible by increasing access to digital and networked technologies.
Abstract:
We carried out a discriminant analysis with identity by descent (IBD) at each marker as the inputs and the sib-pair type (affected-affected versus affected-unaffected) as the output. Using simple logistic regression for this discriminant analysis, we illustrate the importance of comparing models with different numbers of parameters. Such model comparisons are best carried out using either the Akaike information criterion (AIC) or the Bayesian information criterion (BIC). When AIC (or BIC) stepwise variable selection was applied to the German Asthma data set, a group of markers was selected that provides the best fit to the data (assuming an additive effect). Interestingly, these 25-26 markers were not identical to those with the highest (in magnitude) single-locus LOD scores.
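A small sketch of the model-comparison step with simulated stand-in data (not the German Asthma data); statsmodels reports AIC and BIC directly for logistic regressions:

import numpy as np
import statsmodels.api as sm

# Sketch: logistic regressions with different numbers of markers,
# ranked by AIC/BIC. The IBD inputs are simulated stand-ins.
rng = np.random.default_rng(0)
n, n_markers = 400, 10
ibd = rng.choice([0, 0.5, 1], size=(n, n_markers))    # IBD sharing per marker
logit = -0.5 + 1.2 * ibd[:, 0] + 0.8 * ibd[:, 3]      # two truly linked markers
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))         # AA vs AU sib-pair type

def fit(cols):
    X = sm.add_constant(ibd[:, cols])
    return sm.Logit(y, X).fit(disp=0)

for cols in ([0], [0, 3], list(range(n_markers))):
    res = fit(cols)
    print(f"markers {cols}: AIC={res.aic:.1f}  BIC={res.bic:.1f}")

# Lower AIC/BIC is better; the full 10-marker model is typically
# penalized relative to the two truly associated markers.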
Abstract:
Background: A pandemic strain of influenza A spread rapidly around the world in 2009, now referred to as pandemic (H1N1) 2009. This study aimed to examine the spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 associated with changes in local socio-environmental conditions from May 7 to December 31, 2009, at a postal area level in Queensland, Australia.
Methods: We used data on laboratory-confirmed H1N1 cases to examine the spatiotemporal dynamics of transmission using a flexible Bayesian, space-time, Susceptible-Infected-Recovered (SIR) modelling approach. The model incorporated parameters describing spatiotemporal variation in H1N1 infection and local socio-environmental factors.
Results: The weekly transmission rate of pandemic (H1N1) 2009 was negatively associated with the weekly area-mean maximum temperature at a lag of 1 week (LMXT) (posterior mean: −0.341; 95% credible interval (CI): −0.370 to −0.311) and with the socio-economic index for areas (SEIFA) (posterior mean: −0.003; 95% CI: −0.004 to −0.001), and was positively associated with the product of LMXT and the weekly area-mean vapour pressure at a lag of 1 week (LVAP) (posterior mean: 0.008; 95% CI: 0.007 to 0.009). There was substantial spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 across Queensland over the epidemic period. Large random effects on estimated transmission rates were apparent in remote areas and in some postal areas with a higher proportion of Indigenous populations and smaller overall populations.
Conclusions: Local SEIFA and local atmospheric conditions were associated with the transmission rate of pandemic (H1N1) 2009. The more populated regions displayed consistent and synchronized epidemics with low average transmission rates, whereas the less populated regions had high average transmission rates with more variation during the H1N1 epidemic period.
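A sketch of the model's transmission-rate structure, plugging in the posterior means reported above; the intercept, recovery rate and (centred/scaled) covariate values are illustrative assumptions:

import numpy as np

# Sketch: a log-linear link from local covariates to the weekly
# transmission rate of a discrete-time stochastic SIR model, using the
# posterior means from the abstract. Covariates are assumed
# centred/scaled; the baseline and recovery rate are invented.
rng = np.random.default_rng(5)
b_lmxt, b_seifa, b_inter = -0.341, -0.003, 0.008   # posterior means
beta0 = -0.3                                       # hypothetical baseline

def weekly_rate(lmxt, seifa, lvap):
    return np.exp(beta0 + b_lmxt * lmxt + b_seifa * seifa
                  + b_inter * lmxt * lvap)

def sir_week(S, I, R, rate, gamma=0.5):
    # One week of a stochastic discrete-time SIR in a single area
    N = S + I + R
    new_inf = rng.binomial(S, 1 - np.exp(-rate * I / N))
    new_rec = rng.binomial(I, 1 - np.exp(-gamma))
    return S - new_inf, I + new_inf - new_rec, R + new_rec

S, I, R = 9990, 10, 0
for week in range(30):
    # a cooler-than-average week (standardized LMXT < 0) raises the rate
    S, I, R = sir_week(S, I, R, weekly_rate(lmxt=-1.0, seifa=0.0, lvap=0.5))
print("attack rate:", R / (S + I + R))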
Abstract:
The city system has been a prevailing research issue in the fields of urban geography and regional economics. Relationships between cities in the city system exist not only in the form of rankings, but also in a more general network form. Previous work has examined the spatial structure of the city system in terms of its separate industrial networks, such as transportation and economic activity, but little has been done to compare different networks. To rectify this situation, this study analyzes and reveals the spatial structural features of China's city system by comparing its transportation and economic urban networks, thus providing new avenues for research on China's city network. The results indicate that the two networks relate to each other by sharing structural equivalence, with a basic diamond structure and a layered intercity structure decreasing outwards from the national centers. A decoupling effect also exists between them, as the transportation network contributes to balanced regional development while the economic network promotes agglomeration economies. The law of economic development and the government both play important roles in the articulation between these two networks, and the gap between them can be narrowed by related policy reforms and improvements to the transportation network.
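One simple, generic way to quantify structural equivalence between two intercity networks is to correlate their adjacency matrices over the same set of cities; the matrices below are invented toy data, not the study's networks:

import numpy as np

# Sketch: compare two intercity networks by correlating their
# off-diagonal adjacency entries (a QAP-style similarity measure).
cities = ["BJ", "SH", "GZ", "CD"]
transport = np.array([[0, 5, 3, 2],
                      [5, 0, 4, 1],
                      [3, 4, 0, 1],
                      [2, 1, 1, 0]], float)
economic = np.array([[0, 6, 2, 1],
                     [6, 0, 5, 1],
                     [2, 5, 0, 2],
                     [1, 1, 2, 0]], float)

mask = ~np.eye(len(cities), dtype=bool)           # ignore the diagonal
r = np.corrcoef(transport[mask], economic[mask])[0, 1]
print(f"network correlation: {r:.2f}")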
Abstract:
In this paper, we examine approaches to estimating a Bayesian mixture model at both single and multiple time points for a sample of actual and simulated aerosol particle size distribution (PSD) data. For estimation of a mixture model at a single time point, we use reversible jump Markov chain Monte Carlo (RJMCMC) to estimate the mixture model parameters, including the number of components, which is assumed to be unknown. We compare the results of this approach to a commonly used estimation method in the aerosol physics literature. As PSD data are often measured repeatedly over time, at small time intervals, we also examine the use of an informative prior for the mixture parameters that takes into account the correlated nature of the parameters over time. The Bayesian mixture model offers a promising approach, providing advantages in both estimation and inference.
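A simplified sketch of the informative-prior idea for repeated measurements (a moment-matching stand-in, not the paper's exact construction): the posterior for a component parameter at one time point becomes the prior at the next:

import numpy as np

# Sketch: carry information across time points by feeding the
# conjugate posterior for a component mean at time t in as the prior
# at time t+1. Data and drift are invented.
rng = np.random.default_rng(2)

def posterior_mu(data, prior_mean, prior_var, obs_var=1.0):
    # Conjugate normal update for a component mean
    n = len(data)
    var = 1.0 / (1.0 / prior_var + n / obs_var)
    mean = var * (prior_mean / prior_var + data.sum() / obs_var)
    return mean, var

prior_mean, prior_var = 0.0, 100.0        # vague prior at the first time point
for t in range(5):
    data_t = rng.normal(2.0 + 0.1 * t, 1.0, size=50)   # slowly drifting mode
    prior_mean, prior_var = posterior_mu(data_t, prior_mean, prior_var)
    print(f"t={t}: posterior mean {prior_mean:.2f}, var {prior_var:.4f}")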
Abstract:
We use Bayesian model selection techniques to test extensions of the standard flat LambdaCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation, without further computational effort, and it comes with an associated error estimate. Moreover, it provides an unbiased estimator of the evidence after any fixed number of iterations and is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability of the evidence evaluation and its stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat LambdaCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0.
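A toy sketch of the PMC evidence estimate on a one-dimensional problem with a known answer; the model and tuning choices are illustrative, not the paper's cosmological setup:

import numpy as np
from scipy import stats

# Toy PMC: prior N(0, 5^2), likelihood N(x | theta, 1) with x = 1, so
# the exact evidence is N(x | 0, 26).
rng = np.random.default_rng(3)
x = 1.0

def log_target(theta):
    # unnormalized posterior = prior * likelihood
    return stats.norm.logpdf(theta, 0, 5) + stats.norm.logpdf(x, theta, 1)

mu, sigma = 0.0, 5.0                      # initial proposal = prior
for _ in range(5):                        # a few adaptation rounds
    theta = rng.normal(mu, sigma, 20000)
    logw = log_target(theta) - stats.norm.logpdf(theta, mu, sigma)
    wn = np.exp(logw - logw.max())
    wn /= wn.sum()
    mu = np.sum(wn * theta)               # adapt the proposal to the weights
    sigma = np.sqrt(np.sum(wn * (theta - mu) ** 2))

# The evidence is just the mean unnormalized importance weight of the
# final sample -- no computation beyond what parameter estimation used.
print("PMC estimate:", np.exp(logw).mean())
print("exact       :", stats.norm.pdf(x, 0, np.sqrt(26)))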
Abstract:
In this note, we briefly survey some recent approaches to approximating the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
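A toy comparison of two of the surveyed estimators on a conjugate model where the evidence is available in closed form (an illustration of the estimators, not the note's own code):

import numpy as np

# Conjugate toy model: theta ~ N(0, 1), x_i | theta ~ N(theta, 1).
rng = np.random.default_rng(4)
x = rng.normal(0.5, 1.0, 30)
n, S = len(x), x.sum()

def loglik(theta):
    theta = np.atleast_1d(theta)
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * ((x[None, :] - theta[:, None]) ** 2).sum(axis=1))

# Closed-form log-evidence for the conjugate model
log_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
             - 0.5 * (x ** 2).sum() + S ** 2 / (2 * (n + 1)))

# Importance sampling with the prior as proposal: Z = mean likelihood
ll = loglik(rng.normal(0, 1, 20000))
log_is = np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()

# Harmonic mean over exact posterior draws: Z = 1 / mean(1 / likelihood)
post_var = 1.0 / (n + 1)
llp = loglik(rng.normal(S * post_var, np.sqrt(post_var), 20000))
log_hm = llp.min() - np.log(np.mean(np.exp(-(llp - llp.min()))))

print(f"exact {log_exact:.2f}  IS {log_is:.2f}  HM {log_hm:.2f}")
# The harmonic mean estimator is notoriously unstable (it has infinite
# variance here), one of the issues such surveys revisit.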
Abstract:
We review the literature on the combined association between lung cancer and two environmental exposures, asbestos exposure and smoking, and explore a Bayesian approach to assess evidence of interaction between the exposures. The meta-analysis combines separate indices of additive and multiplicative relationships and multivariate relative risk estimates. By making inferences on posterior probabilities we can explore both the form and strength of interaction. This analysis may be more informative than providing evidence to support one relation over another on the basis of statistical significance. Overall, we find evidence for a more than additive and less than multiplicative relation.
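The additive and multiplicative scales at issue can be made concrete with invented relative risks (not the meta-analytic estimates from the review):

# Sketch of the two interaction scales, with invented relative risks
# for smoking (S), asbestos (A) and joint exposure (SA), each relative
# to the doubly unexposed group.
rr_s, rr_a, rr_sa = 10.0, 5.0, 30.0

reri = rr_sa - rr_s - rr_a + 1      # additive scale: RERI > 0 = more than additive
mult = rr_sa / (rr_s * rr_a)        # multiplicative scale: < 1 = less than multiplicative

print(f"RERI = {reri:.1f}  (> 0: more than additive)")
print(f"multiplicative index = {mult:.2f}  (< 1: less than multiplicative)")

With these toy numbers the joint effect exceeds the additive prediction (RERI = 16) yet falls short of the multiplicative one (index 0.6), the same qualitative pattern the review reports.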