978 results for Approximate Bayesian Computation


Relevance: 20.00%

Publisher:

Abstract:

In this article, a non-autonomous (time-varying) semilinear system is considered and its approximate controllability is investigated. The notion of a 'bounded integral contractor', introduced by Altman, is exploited to obtain sufficient conditions for approximate controllability; this condition is weaker than the Lipschitz condition. The main theorems of Naito [11, 12] are obtained as corollaries of our main results. An example is also given to show how our results weaken the conditions assumed by Sukavanam [17].


Let A be a positive definite operator in a Hilbert space and consider the initial value problem for u_t = -A^2 u. Using a representation of the semigroup exp(-A^2 t) in terms of the group exp(iAt), we express u in terms of the solution of the standard heat equation w_t = w_yy, with initial values v solving the initial value problem for v_y = iAv. This representation is used to construct a method for approximating u in terms of approximations of v. In the case that A is a 2nd-order elliptic operator, the method is combined with finite elements in the spatial variable, reducing the solution of the 4th-order equation for u to that of the 2nd-order equation for v, followed by the solution of the heat equation in one space variable.
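The scalar case makes the representation concrete. The sketch below (function names are my own, and the operator A is replaced by a number a) numerically checks that exp(-a^2 t) equals a Gaussian average of the group exp(iay):

```python
import numpy as np

def heat_from_group(a, t, y_max=50.0, n=40001):
    # For a scalar stand-in A = a, the semigroup value exp(-a^2 t) equals
    # a Gaussian average of the group exp(i*a*y):
    #   exp(-a^2 t) = (4*pi*t)^(-1/2) * Integral exp(-y^2/(4t)) exp(i*a*y) dy
    y = np.linspace(-y_max, y_max, n)
    dy = y[1] - y[0]
    kernel = np.exp(-y**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    # The imaginary part cancels by symmetry, so only the cosine part remains.
    return float(np.sum(kernel * np.cos(a * y)) * dy)

a, t = 1.3, 0.7
approx = heat_from_group(a, t)
exact = float(np.exp(-a * a * t))
```

For operator-valued A the same identity holds with exp(iAy) applied to the initial data, which is what the paper's method exploits.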


This paper presents the proper computational approach for estimating strain energy release rates by the modified crack closure integral (MCCI), in particular the estimation of the consistent nodal force vectors used in the MCCI expressions for quarter-point singular elements, wherein all the nodal force vectors participate in the computation of strain energy release rates by MCCI. A numerical example of a centre-crack tension specimen under uniform loading is presented to illustrate the approach.


The conventional procedure for determining the surface potential of a clay platelet and the variation of potential with distance is lengthy and time-consuming. Simplified graphical procedures based on Gouy theory have been developed and are presented. The new procedures are simple, accurate and considerably less time-consuming.
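For comparison, the quantity such procedures approximate, the decay of potential with distance in Gouy-Chapman theory for a symmetric electrolyte, has a closed dimensionless form. A minimal sketch (function name is illustrative):

```python
import math

def scaled_potential(y0, kappa_x):
    """Gouy-Chapman decay of the scaled potential y = z*e*psi/(k*T) for a
    symmetric electrolyte:  tanh(y/4) = tanh(y0/4) * exp(-kappa*x),
    where y0 is the scaled surface potential and kappa the Debye parameter."""
    g = math.tanh(y0 / 4.0) * math.exp(-kappa_x)
    return 4.0 * math.atanh(g)

y0 = 4.0  # scaled surface potential (illustrative value)
profile = [scaled_potential(y0, kx) for kx in (0.0, 0.5, 1.0, 2.0)]
```

The profile starts at y0 on the surface and decays monotonically with distance, which is the curve the graphical procedures read off directly.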


This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and make a decision on which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predicted probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to assess the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
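Under the simplest version of such a Bayesian treatment, a Beta prior on each algorithm's per-image success rate with conditionally independent Bernoulli outcomes, the posterior predictive probability of a correct outcome has a closed form. A minimal sketch (the counts and function name are hypothetical, not from the paper):

```python
def predictive_correct(successes, trials, a=1.0, b=1.0):
    """Posterior predictive probability that the next detection is correct,
    assuming a Beta(a, b) prior on the per-image success rate and
    conditionally independent Bernoulli test outcomes."""
    return (successes + a) / (trials + a + b)

# Hypothetical test counts for two detectors (numbers are made up):
p_alg1 = predictive_correct(46, 50)   # (46 + 1) / (50 + 2)
p_alg2 = predictive_correct(40, 50)   # (40 + 1) / (50 + 2)
```

Unlike a raw accuracy estimate, the predictive probability shrinks towards the prior when the number of trials is small, which is exactly the uncertainty the paper argues current metrics fail to capture.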


Spatial data analysis has become increasingly important in studies of ecology and economics during the last decade. One focus of spatial data analysis is how to select predictors, variance functions and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our target is to find a good strategy for identifying the best model from the candidate set using model selection criteria. We evaluate the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC) and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, working variance function and working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes. Four candidate covariance functions (exponential, Gaussian, Matern and rational quadratic) are used in the simulation studies. Summarising the simulation results, we find that a misspecified working correlation structure can still capture some spatial correlation information in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, as indicated by the average adjusted R-squared, and overall RIC performs well.
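As a small illustration of criterion-based selection among candidate covariance models, BIC can be computed from each fitted model's maximised log-likelihood and parameter count; the model with the lowest score is chosen. All numbers below are made up for illustration:

```python
import math

def bic(loglik, k, n):
    # Bayesian information criterion: k*ln(n) - 2*log-likelihood (lower is better)
    return k * math.log(n) - 2.0 * loglik

# Hypothetical maximised log-likelihoods and parameter counts for four
# working covariance models fitted to n = 100 sites:
n = 100
candidates = {
    "exponential":        (-212.4, 3),
    "gaussian":           (-215.1, 3),
    "matern":             (-211.9, 4),
    "rational_quadratic": (-213.0, 4),
}
scores = {name: bic(ll, k, n) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
```

Note how the extra parameter of the Matern model is penalised: despite the slightly higher log-likelihood, its BIC exceeds that of the exponential model here.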


A flexible and simple Bayesian decision-theoretic design for dose-finding trials is proposed in this paper. To reduce the computational burden, we adopt a working model with conjugate priors, which is flexible enough to fit all monotonic dose-toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interest of the trial. Patients are allocated based not only on the utility function but also on the chosen dose selection rule. The most popular dose selection rule is the one-step-look-ahead (OSLA), which selects the best-so-far dose. A more complicated rule, such as the two-step-look-ahead, is theoretically more efficient than OSLA only when the required distributional assumptions are met, which is often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that OSLA was often more efficient than the two-step-look-ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method's performance is superior to several popular Bayesian methods and that the negative impact of prior misspecification can be managed in the design stage.
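A minimal sketch of the OSLA idea, using independent conjugate Beta(1, 1) priors per dose. This is a deliberate simplification: the paper's working model fits monotone dose-toxicity curves, which independent priors do not enforce, and the dose counts below are hypothetical.

```python
def osla_next_dose(tox, n, target=0.30):
    """One-step-look-ahead with independent Beta(1, 1) priors per dose:
    allocate the next patient to the dose whose posterior mean toxicity
    is closest to the target toxicity rate.  (Independent priors are a
    simplification; they do not enforce dose-toxicity monotonicity.)"""
    means = [(t + 1.0) / (m + 2.0) for t, m in zip(tox, n)]
    return min(range(len(means)), key=lambda d: abs(means[d] - target))

tox = [0, 1, 2, 4]   # toxicities observed so far at each dose (hypothetical)
n   = [3, 6, 6, 5]   # patients treated so far at each dose
next_dose = osla_next_dose(tox, n)
```

With these counts the posterior mean toxicities are 0.20, 0.25, 0.375 and about 0.71, so the rule allocates the next patient to the second dose, whose mean is nearest the 0.30 target.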


So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability for the treatment falls within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities, and show that our method can control these Bayesian-type errors as well. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
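The frequentist operating characteristics of any two-stage stopping rule can be computed exactly from binomial probabilities. A sketch with illustrative design parameters (chosen for this example, not taken from Simon's published tables):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def promising_prob(p, n1, r1, n, r):
    """P(declare the treatment promising) in a two-stage design:
    continue past stage 1 if responses x1 > r1 among n1 patients, and
    declare the treatment promising if x1 + x2 > r among n patients total."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        for x2 in range(0, n - n1 + 1):
            if x1 + x2 > r:
                total += binom_pmf(x1, n1, p) * binom_pmf(x2, n - n1, p)
    return total

# Illustrative design: n1=10, r1=1, n=29, r=5; null p0=0.10, alternative p1=0.30
alpha = promising_prob(0.10, 10, 1, 29, 5)   # type I error under p0
power = promising_prob(0.30, 10, 1, 29, 5)   # power under p1
```

Designing to control frequentist error rates then amounts to searching over (n1, r1, n, r) so that alpha and 1 - power stay below the desired levels.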


Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain; the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.


The minimum cost classifier, when general cost functions are associated with the tasks of feature measurement and classification, is formulated as a decision graph which does not reject class labels at intermediate stages. Noting its complexity, a heuristic procedure to simplify this scheme to a binary decision tree is presented. The optimization of the binary tree in this context is carried out using dynamic programming. The technique is applied to voiced-unvoiced-silence classification in speech processing.
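The cost-based recursion at the heart of such a design can be sketched on a toy problem with two classes and two binary features. All numbers and feature names are hypothetical, and the paper's heuristic operates on a richer decision graph; this only illustrates the "measure or classify" trade-off solved by dynamic programming.

```python
# Toy problem: two speech classes, two binary features with known
# class-conditional probabilities (all numbers hypothetical).
priors = {"voiced": 0.6, "unvoiced": 0.4}
lik = {"zcr":    {"voiced": 0.2, "unvoiced": 0.8},   # P(feature = 1 | class)
       "energy": {"voiced": 0.9, "unvoiced": 0.3}}
meas_cost = {"zcr": 1.0, "energy": 2.0}
err_cost = 10.0                                       # misclassification cost

def stop_cost(post):
    # Classify now: expected cost is err_cost times P(most likely class is wrong).
    return err_cost * (1.0 - max(post.values()))

def min_cost(post, unused):
    # Dynamic-programming recursion: either stop and classify, or pay to
    # measure one more feature and continue optimally from the posterior.
    best = stop_cost(post)
    for f in unused:
        exp_cost = meas_cost[f]
        for v in (0, 1):
            pv = sum(post[c] * (lik[f][c] if v else 1.0 - lik[f][c]) for c in post)
            if pv > 0.0:
                cond = {c: post[c] * (lik[f][c] if v else 1.0 - lik[f][c]) / pv
                        for c in post}
                exp_cost += pv * min_cost(cond, unused - {f})
        best = min(best, exp_cost)
    return best

optimal = min_cost(priors, frozenset(lik))
```

Here classifying immediately costs 4.0 in expectation, and the recursion confirms that measuring a feature first is cheaper overall.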


Disease mapping involves the description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; for ecological analyses incorporating area- and/or individual-level covariates; or for cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
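A common first step with areal counts, before any smoothing model is applied, is the standardised incidence ratio per area. A minimal sketch with hypothetical counts:

```python
def sir(observed, expected):
    """Standardised incidence ratios for a set of areas: observed counts
    divided by expected counts (e.g. from age-standardised rates).  These
    raw ratios are a common starting point for choropleth maps."""
    return [o / e for o, e in zip(observed, expected)]

obs = [12, 5, 30]               # hypothetical disease counts in three areas
exp_counts = [10.0, 8.0, 25.0]  # expected counts under the reference rates
ratios = sir(obs, exp_counts)
```

A ratio above 1 flags an area with more cases than expected; mapping these raw ratios directly is what the smoothing methods discussed later improve on, since small areas produce unstable ratios.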


This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
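The overfitting idea, priors chosen so that extra components get weights approaching zero, can be seen directly in the Dirichlet prior on the mixture weights. The sketch below illustrates the prior only (it is not the Zmix sampler), comparing a sparse concentration parameter with a flat one:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10          # deliberately overfitted number of components
draws = 1000

# Sparse prior: weights ~ Dirichlet(alpha, ..., alpha) with small alpha
# pushes the extra components towards (near-)zero weight.
sparse = rng.dirichlet(0.01 * np.ones(K), size=draws)
flat   = rng.dirichlet(1.00 * np.ones(K), size=draws)

# Average number of components carrying more than 1% of the total weight:
active_sparse = float((sparse > 0.01).sum(axis=1).mean())
active_flat = float((flat > 0.01).sum(axis=1).mean())
```

Under the sparse prior only one or two components are typically "active" out of the ten allowed, which is why the number of occupied components in the fitted overfitted mixture can serve as an estimate of the true number of components.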


Approximate closed-form solutions of the non-linear relative equations of motion of an interceptor pursuing a target under the realistic true proportional navigation (RTPN) guidance law are derived using the Adomian decomposition method in this article. In the literature, no study has been reported on the derivation of explicit closed-form time-series solutions of the nonlinear dynamic engagement equations under RTPN guidance. The Adomian method provides an analytical approximation, requiring no linearization or direct integration of the non-linear terms. The complete derivation of the Adomian polynomials for the analysis of the engagement dynamics under RTPN guidance is presented for the deterministic ideal case, for non-ideal dynamics in the loop comprising autopilot and actuator dynamics and target manoeuvre, as well as for a stochastic case. Numerical results illustrate the applicability of the method.
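The mechanics of the Adomian method can be sketched on a scalar test problem, u' = -u^2 with u(0) = 1, whose exact solution 1/(1+t) has Taylor coefficients 1, -1, 1, -1, ... The engagement equations in the paper are far richer, but the recursion is the same shape: expand the nonlinearity in Adomian polynomials A_n and integrate term by term. All helper names below are my own.

```python
# Adomian decomposition for u' = -u^2, u(0) = 1 (exact solution 1/(1+t)).
# Polynomials in t are stored as coefficient lists [c0, c1, ...].

def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(a, b):
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(m)]

def pint(a):
    # definite integral from 0:  c_k t^k  ->  c_k/(k+1) t^(k+1)
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

terms = [[1.0]]                            # u_0 = u(0) = 1
for n in range(6):
    # Adomian polynomial of N(u) = u^2:  A_n = sum over i+j=n of u_i * u_j
    A = [0.0]
    for i in range(n + 1):
        A = padd(A, pmul(terms[i], terms[n - i]))
    terms.append([-c for c in pint(A)])    # u_{n+1} = -Integral of A_n

series = [0.0]
for u_n in terms:
    series = padd(series, u_n)
# series recovers the Taylor coefficients of 1/(1+t): 1, -1, 1, -1, ...
```

Note that no linearization or direct integration of the nonlinear term was needed: u^2 enters only through the polynomials A_n built from already-computed terms.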


The two-dimensional, q-state (q > 4) Potts model is used as a testing ground for approximate theories of first-order phase transitions. In particular, the predictions of a theory analogous to the Ramakrishnan-Yussouff theory of freezing are compared with those of ordinary mean-field (Curie-Weiss) theory. It is found that the Curie-Weiss theory is a better approximation than the Ramakrishnan-Yussouff theory, even though the former neglects all fluctuations. It is shown that the Ramakrishnan-Yussouff theory overestimates the effects of fluctuations in this system. The reasons behind the failure of the Ramakrishnan-Yussouff approximation and the suitability of the two-dimensional Potts model as a testing ground for such theories are discussed.
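The Curie-Weiss side of the comparison is easy to reproduce: minimising the mean-field free energy of the q-state Potts model over the order parameter m exhibits the discontinuous (first-order) jump for q > 2. A sketch under the standard single-site mean-field ansatz (function names and the coupling values are illustrative):

```python
import math

def mf_free_energy(m, q, K):
    """Mean-field (Curie-Weiss) free energy per site, in units of T, of the
    q-state Potts model with coupling K = z*J/T.  The single-site state
    probabilities are p1 = (1+(q-1)m)/q for the favoured state and
    (1-m)/q for each of the others."""
    p1 = (1.0 + (q - 1.0) * m) / q
    rest = (1.0 - m) / q
    probs = [p1] + [rest] * (q - 1)
    energy = -0.5 * K * sum(p * p for p in probs)
    mixing = sum(p * math.log(p) for p in probs if p > 0.0)
    return energy + mixing

def optimal_m(q, K, grid=2001):
    # brute-force minimisation over m in [0, 1]
    ms = [i / (grid - 1.0) for i in range(grid)]
    return min(ms, key=lambda m: mf_free_energy(m, q, K))

q = 6
m_weak = optimal_m(q, 2.0)    # weak coupling: disordered phase, m ~ 0
m_strong = optimal_m(q, 6.0)  # strong coupling: ordered phase, large m
```

The minimiser jumps from m near 0 to a large value as the coupling crosses its transition point, rather than growing continuously, which is the first-order behaviour the Potts model is used to test.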