973 results for Approximations


Relevance: 10.00%

Abstract:

Generalized linear mixed models (GLMM) are generalized linear models with normally distributed random effects in the linear predictor. Penalized quasi-likelihood (PQL), an approximate method of inference in GLMMs, involves repeated fitting of linear mixed models with “working” dependent variables and iterative weights that depend on parameter estimates from the previous cycle of iteration. The generality of PQL, and its implementation in commercially available software, has encouraged the application of GLMMs in many scientific fields. Caution is needed, however, since PQL may sometimes yield badly biased estimates of variance components, especially with binary outcomes. Recent developments in numerical integration, including adaptive Gaussian quadrature, higher order Laplace expansions, stochastic integration and Markov chain Monte Carlo (MCMC) algorithms, provide attractive alternatives to PQL for approximate likelihood inference in GLMMs. Analyses of some well known datasets, and simulations based on these analyses, suggest that PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations. Adaptive Gaussian quadrature is a viable alternative for nested designs where the numerical integration is limited to a small number of dimensions. Higher order Laplace approximations hold the promise of accurate inference more generally. MCMC is likely the method of choice for the most complex problems that involve high dimensional integrals.
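
As a rough illustration of the PQL cycle described above, the following Python sketch (our own minimal example, assuming a binary outcome with a logit link; the weighted linear mixed model fit that would consume these quantities is not shown) forms the "working" dependent variable and iterative weights from the current linear predictor.

```python
import numpy as np

def pql_working_quantities(y, eta):
    """One PQL step for a binary-outcome GLMM with logit link: build the
    'working' dependent variable z and iterative weights w from the current
    linear predictor eta = X @ beta + Z @ b."""
    mu = 1.0 / (1.0 + np.exp(-eta))   # fitted probabilities
    dmu = mu * (1.0 - mu)             # d(mu)/d(eta); also Var(y) for Bernoulli data
    z = eta + (y - mu) / dmu          # working response
    w = dmu                           # weights 1 / [g'(mu)^2 Var(y)] reduce to mu(1-mu) here
    return z, w
```

Each cycle fits a weighted linear mixed model to (z, w) and recomputes both from the updated parameter estimates until convergence.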

Relevance: 10.00%

Abstract:

Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model, normal base measures and Gibbs sampling procedures based on the Pólya urn scheme are often used to simulate posterior draws. These algorithms are applicable in the conjugate case when (for a normal base measure) the likelihood is normal. In the non-conjugate case, the algorithms proposed by MacEachern and Müller (1998) and Neal (2000) are often applied to generate posterior samples. Some common problems associated with simulation algorithms for non-conjugate MDP models include convergence and mixing difficulties. This paper proposes an algorithm based on the Pólya urn scheme that extends the Gibbs sampling algorithms to non-conjugate models with normal base measures and exponential family likelihoods. The algorithm proceeds by making Laplace approximations to the likelihood function, thereby reducing the procedure to that of conjugate normal MDP models. To ensure the validity of the stationary distribution in the non-conjugate case, the proposals are accepted or rejected by a Metropolis-Hastings step. In the special case where the data are normally distributed, the algorithm is identical to the Gibbs sampler.
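
A minimal sketch of the Laplace step described above, for a single Poisson observation with log-mean θ (a hypothetical one-observation example, not the paper's full Pólya urn sampler): the exponential-family likelihood is replaced by a normal in θ whose mean and variance come from the mode and curvature of the log-likelihood, restoring conjugacy with a normal base measure; a Metropolis-Hastings accept/reject step would then correct for the approximation.

```python
import numpy as np

def laplace_normal_approx(y):
    """Laplace approximation of the Poisson log-likelihood l(theta) = y*theta - exp(theta)
    (up to a constant) by a normal in theta; assumes y > 0 so the mode is finite."""
    mode = np.log(y)      # maximiser of l(theta)
    var = 1.0 / y         # inverse of the negative second derivative exp(mode) = y
    return mode, var      # mean and variance of the approximating normal "pseudo-likelihood"
```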

Relevance: 10.00%

Abstract:

Suppose that we are interested in establishing simple, but reliable rules for predicting future t-year survivors via censored regression models. In this article, we present inference procedures for evaluating such binary classification rules based on various prediction precision measures quantified by the overall misclassification rate, sensitivity and specificity, and positive and negative predictive values. Specifically, under various working models we derive consistent estimators for the above measures via substitution and cross validation estimation procedures. Furthermore, we provide large sample approximations to the distributions of these nonsmooth estimators without assuming that the working model is correctly specified. Confidence intervals, for example, for the difference of the precision measures between two competing rules can then be constructed. All the proposals are illustrated with two real examples and their finite sample properties are evaluated via a simulation study.
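
For concreteness, a plug-in computation of the precision measures listed above for a binary rule against observed t-year status (our own toy illustration; the paper's estimators additionally account for censoring through the working regression model and for overfitting through cross-validation):

```python
import numpy as np

def precision_measures(status, rule):
    """Empirical precision measures: status = 1 if the subject failed by year t,
    rule = 1 if the classification rule predicts failure by year t."""
    d = np.asarray(status, dtype=bool)
    c = np.asarray(rule, dtype=bool)
    return {
        "misclassification rate":    np.mean(d != c),
        "sensitivity":               np.mean(c[d]),    # P(rule = 1 | failure)
        "specificity":               np.mean(~c[~d]),  # P(rule = 0 | no failure)
        "positive predictive value": np.mean(d[c]),    # P(failure | rule = 1)
        "negative predictive value": np.mean(~d[~c]),  # P(no failure | rule = 0)
    }
```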

Relevance: 10.00%

Abstract:

In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion (AIC) have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that leads to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.
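
As a point of reference, here is a sketch of the uncorrected conditional AIC for a Gaussian linear mixed model, with the random effects covariance treated as known; this is precisely the common practice whose bias the abstract describes, and the corrected criterion derived in the paper is not reproduced here. The function name is ours, and the "+1" for the estimated residual variance follows the conventional asymptotic form.

```python
import numpy as np

def conditional_aic(y, X, Z, beta, b, sigma2, G):
    """Uncorrected conditional AIC for y = X beta + Z b + e, e ~ N(0, sigma2 I),
    b ~ N(0, G), ignoring the uncertainty in the estimated covariance parameters."""
    n = len(y)
    resid = y - X @ beta - Z @ b
    cond_loglik = -0.5 * (n * np.log(2.0 * np.pi * sigma2) + resid @ resid / sigma2)
    # effective degrees of freedom: trace of the hat matrix mapping y to fitted values
    C = np.hstack([X, Z])
    penalty = np.zeros((C.shape[1], C.shape[1]))
    penalty[X.shape[1]:, X.shape[1]:] = sigma2 * np.linalg.inv(G)
    rho = np.trace(C @ np.linalg.solve(C.T @ C + penalty, C.T))
    return -2.0 * cond_loglik + 2.0 * (rho + 1.0)   # +1 for the estimated residual variance
```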

Relevance: 10.00%

Abstract:

BACKGROUND: Previous meta-analyses described moderate to large benefits of chondroitin in patients with osteoarthritis. However, recent large-scale trials did not find evidence of an effect. PURPOSE: To determine the effects of chondroitin on pain in patients with osteoarthritis. DATA SOURCES: The authors searched the Cochrane Central Register of Controlled Trials (1970 to 2006), MEDLINE (1966 to 2006), EMBASE (1980 to 2006), CINAHL (1970 to 2006), and conference proceedings; checked reference lists; and contacted authors. The last update of searches was performed on 30 November 2006. STUDY SELECTION: Studies were included if they were randomized or quasi-randomized, controlled trials that compared chondroitin with placebo or with no treatment in patients with osteoarthritis of the knee or hip. There were no language restrictions. DATA EXTRACTION: The authors extracted data in duplicate. Effect sizes were calculated from the differences in means of pain-related outcomes between treatment and control groups at the end of the trial, divided by the pooled SD. Trials were combined by using random-effects meta-analysis. DATA SYNTHESIS: 20 trials (3846 patients) contributed to the meta-analysis, which revealed a high degree of heterogeneity among the trials (I² = 92%). Small trials, trials with unclear concealment of allocation, and trials that were not analyzed according to the intention-to-treat principle showed larger effects in favor of chondroitin than did the remaining trials. When the authors restricted the analysis to the 3 trials with large sample sizes and an intention-to-treat analysis, 40% of patients were included. This resulted in an effect size of -0.03 (95% CI, -0.13 to 0.07; I² = 0%) and corresponded to a difference of 0.6 mm on a 10-cm visual analogue scale. A meta-analysis of 12 trials showed a pooled relative risk of 0.99 (CI, 0.76 to 1.31) for any adverse event. LIMITATIONS: For 9 trials, the authors had to use approximations to calculate effect sizes. Trial quality was generally low, heterogeneity among the trials made initial interpretation of results difficult, and exploring sources of heterogeneity in meta-regression and stratified analyses may be unreliable. CONCLUSIONS: Large-scale, methodologically sound trials indicate that the symptomatic benefit of chondroitin is minimal or nonexistent. Use of chondroitin in routine clinical practice should therefore be discouraged.
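
The effect-size definition used in the meta-analysis (difference in mean pain outcomes at the end of the trial divided by the pooled SD) amounts to the following computation, shown here as a generic sketch with hypothetical trial summary statistics; negative values favour chondroitin.

```python
import numpy as np

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Difference in group means divided by the pooled standard deviation."""
    pooled_sd = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# hypothetical trial: chondroitin arm vs. placebo arm (pain on a 10-cm VAS)
print(standardized_mean_difference(4.1, 2.3, 150, 4.4, 2.4, 148))
```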

Relevance: 10.00%

Abstract:

Suppose that one observes pairs (x_1, Y_1), (x_2, Y_2), ..., (x_n, Y_n), where x_1 < x_2 < ... < x_n are fixed numbers while Y_1, Y_2, ..., Y_n are independent random variables with unknown distributions. The only assumption is that Median(Y_i) = f(x_i) for some unknown convex or concave function f. We present a confidence band for this regression function f using suitable multiscale sign tests. While the exact computation of this band seems to require O(n^4) steps, good approximations can be obtained in O(n^2) steps. In addition the confidence band is shown to have desirable asymptotic properties as the sample size n tends to infinity.
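
The elementary building block of such a band is a sign test of H0: Median(Y_i) = c on a block of observations; a minimal sketch is given below (the multiscale combination of these tests over all blocks, and the conversion into a band over convex or concave functions, are not reproduced here).

```python
from scipy.stats import binom

def sign_test_pvalue(y_block, c):
    """Two-sided sign test that c is the median of the observations in y_block."""
    y = [v for v in y_block if v != c]            # drop ties, as is customary
    n = len(y)
    k = sum(v > c for v in y)                     # number of positive signs
    p = 2.0 * min(binom.cdf(k, n, 0.5), binom.sf(k - 1, n, 0.5))
    return min(1.0, p)
```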

Relevance: 10.00%

Abstract:

We consider the problem of approximating the 3D scan of a real object through an affine combination of examples. Common approaches depend either on the explicit estimation of point-to-point correspondences or on 2-dimensional projections of the target mesh; both present drawbacks. We follow an approach similar to [IF03] by representing the target via an implicit function, whose values at the vertices of the approximation are used to define a robust cost function. The problem is approached in two steps, by approximating first a coarse implicit representation of the whole target, and then finer, local ones; the local approximations are then merged together with a Poisson-based method. We report the results of applying our method on a subset of 3D scans from the Face Recognition Grand Challenge v.1.0.
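
In outline, the cost being minimised can be sketched as follows: the target is represented by an implicit (signed-distance-like) function, evaluated at every vertex of the affine combination of example meshes, and the values are fed to a robust penalty. The Huber-style loss and the callable `signed_distance` are our own placeholders, not the paper's exact formulation.

```python
import numpy as np

def robust_cost(weights, example_vertices, signed_distance, delta=1.0):
    """weights: (k,) affine coefficients summing to one;
    example_vertices: (k, n_vertices, 3) vertex positions of the example meshes;
    signed_distance: callable returning the target's implicit-function value per vertex."""
    verts = np.tensordot(weights, example_vertices, axes=1)   # combined mesh, (n_vertices, 3)
    d = signed_distance(verts)                                # implicit values at the vertices
    quad = 0.5 * d**2
    lin = delta * (np.abs(d) - 0.5 * delta)
    return float(np.sum(np.where(np.abs(d) <= delta, quad, lin)))   # robust (Huber) penalty
```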

Relevance: 10.00%

Abstract:

We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T ≈ 2.33 T_C.

Relevance: 10.00%

Abstract:

Context: Information currently available on the trafficking of minors in the U.S. for commercial sexual exploitation includes approximations of the numbers involved, risk factors that increase the likelihood of victimization, and methods of recruitment and control. However, specific characteristics about this vulnerable population remain largely unknown. Objective: This article has two distinct purposes. The first is to provide the reader with an overview of available information on minor sex trafficking in the U.S. The second is to present findings and discuss policy, research, and educational implications from secondary data analysis of 115 cases of minor sex trafficking in the U.S. Design: Minor sex trafficking cases were identified through two main venues - a review of U.S. Department of Justice press releases of human trafficking cases and an online search of media reports. Searches covered the time period from October 28, 2000, which coincided with the passage of the VTVPA, through October 31, 2009. Cases were included in the analysis if the incident involved at least one victim under the age of 18, occurred in the U.S., and led to at least one perpetrator being arrested, indicted, or convicted. Results: A total of 115 separate incidents involving at least 153 victims were located. These occurrences involved 215 perpetrators, with the majority of them having been convicted (n = 117, 53.4%). The number of victims involved in a single incident ranged from 1 to 9. Over 90% of victims were female, and they ranged in age from 5 to 17 years. There were more U.S. minor victims than those from other countries. Victims had been in captivity from less than 6 months to 5 years. Minors most commonly fell into exploitation through some type of false promise (16.3%, n = 25), followed by kidnapping (9.8%, n = 15). Over a fifth of the sample (22.2%, n = 34) were abused through two commercial sex practices; almost all victims (94.1%, n = 144) were used in prostitution. One of every four victims (24.8%, n = 38) had been advertised on an Internet website. Conclusions: Results of a review of known information about minor sex trafficking and findings from analysis of 115 incidents of the sex trafficking of youth in the U.S. indicate a need for stronger legislation, education of various professional groups, more comprehensive services for victims, stricter laws for pimps and traffickers, and preventive educational interventions beginning at a young age.

Relevance: 10.00%

Abstract:

Historical information is always relevant for clinical trial design. Additionally, if incorporated in the analysis of a new trial, historical data allow the number of subjects to be reduced. This decreases costs and trial duration, facilitates recruitment, and may be more ethical. Yet, under prior-data conflict, an overly optimistic use of historical data may be inappropriate. We address this challenge by deriving a Bayesian meta-analytic-predictive prior from historical data, which is then combined with the new data. This prospective approach is equivalent to a meta-analytic-combined analysis of historical and new data if parameters are exchangeable across trials. The prospective Bayesian version requires a good approximation of the meta-analytic-predictive prior, which is not available analytically. We propose two- or three-component mixtures of standard priors, which allow for good approximations and, for the one-parameter exponential family, straightforward posterior calculations. Moreover, since one of the mixture components is usually vague, mixture priors will often be heavy-tailed and therefore robust. Further robustness and a more rapid reaction to prior-data conflicts can be achieved by adding an extra weakly-informative mixture component. Use of historical prior information is particularly attractive for adaptive trials, as the randomization ratio can then be changed in case of prior-data conflict. Both frequentist operating characteristics and posterior summaries for various data scenarios show that these designs have desirable properties. We illustrate the methodology for a phase II proof-of-concept trial with historical controls from four studies. Robust meta-analytic-predictive priors alleviate prior-data conflicts; they should encourage better and more frequent use of historical data in clinical trials.
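
The "straightforward posterior calculations" for mixtures of conjugate priors can be sketched as follows for a binomial endpoint: under a mixture-of-Beta prior, the posterior is again a mixture of Betas, with component parameters updated conjugately and weights reweighted by each component's marginal likelihood. The specific components in the example (one informative component standing in for the meta-analytic-predictive prior, one vague component for robustness) are hypothetical.

```python
import numpy as np
from scipy.special import betaln

def update_mixture_beta(weights, a, b, y, n):
    """Posterior for a binomial observation (y successes out of n) under the prior
    sum_k weights[k] * Beta(a[k], b[k]); returns updated weights and parameters."""
    weights, a, b = map(np.asarray, (weights, a, b))
    a_post, b_post = a + y, b + n - y
    log_ml = betaln(a_post, b_post) - betaln(a, b)   # marginal likelihood per component
    w = weights * np.exp(log_ml - log_ml.max())      # binomial coefficient cancels across components
    return w / w.sum(), a_post, b_post

# robust prior: 0.9 * Beta(15, 35) (historical information) + 0.1 * Beta(1, 1) (vague)
print(update_mixture_beta([0.9, 0.1], [15.0, 1.0], [35.0, 1.0], y=12, n=20))
```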

Relevance: 10.00%

Abstract:

The rates for lepton number washout in extensions of the Standard Model containing right-handed neutrinos are key ingredients in scenarios for baryogenesis through leptogenesis. We relate these rates to real-time correlation functions at finite temperature, without making use of any particle approximations. The relations are valid to quadratic order in neutrino Yukawa couplings and to all orders in Standard Model couplings. They take into account all spectator processes, and apply both in the symmetric and in the Higgs phase of the electroweak theory. We use the relations to compute washout rates at next-to-leading order in g, where g denotes a Standard Model gauge or Yukawa coupling, both in the non-relativistic and in the relativistic regime. Even in the non-relativistic regime the parametrically dominant radiative corrections are only suppressed by a single power of g. In the non-relativistic regime radiative corrections increase the washout rate by a few percent at high temperatures, but they are of order unity around the weak scale and in the relativistic regime.

Relevance: 10.00%

Abstract:

We present a novel approach for the reconstruction of spectra from Euclidean correlator data that makes close contact with modern Bayesian concepts. It is based upon an axiomatically justified dimensionless prior distribution, which in the case of a constant prior function m(ω) only imprints smoothness on the reconstructed spectrum. In addition we are able to analytically integrate out the only relevant overall hyper-parameter α in the prior, removing the necessity for the Gaussian approximations found e.g. in the Maximum Entropy Method. Using a quasi-Newton minimizer and high-precision arithmetic, we are then able to find the unique global extremum of P[ρ|D] in the full N_ω ≫ N_τ dimensional search space. The method actually yields gradually improving reconstruction results as the quality of the supplied input data increases, without introducing the artificial peak structures often encountered in the MEM. To support these statements we present mock data analyses for the case of zero-width delta peaks and more realistic scenarios, based on the perturbative Euclidean Wilson loop as well as the Wilson line correlator in Coulomb gauge.

Relevance: 10.00%

Abstract:

We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as in the case where general Z3-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the NMSSM mode of the program, detailing the approximations and conventions used.

Relevance: 10.00%

Abstract:

We define a rank function for formulae of the propositional modal μ-calculus such that the rank of a fixed point is strictly bigger than the rank of any of its finite approximations. A rank function of this kind is needed, for instance, to establish the collapse of the modal μ-hierarchy over transitive transition systems. We show that the range of the rank function is ω^ω. Further we establish that the rank is computable by primitive recursion, which gives us a uniform method to generate formulae of arbitrary rank below ω^ω.
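
To make the ordinal bound concrete: ordinals below ω^ω can be written in Cantor normal form with finitely many natural-number coefficients and compared lexicographically, as in the small sketch below. This is only the standard representation of such ordinals, not the paper's definition of the rank function itself.

```python
def ord_leq(alpha, beta):
    """Compare ordinals below omega^omega given as coefficient tuples
    (c_k, ..., c_1, c_0) standing for omega^k * c_k + ... + omega * c_1 + c_0."""
    k = max(len(alpha), len(beta))
    pad = lambda o: (0,) * (k - len(o)) + tuple(o)
    return pad(alpha) <= pad(beta)      # lexicographic order realises the ordinal order

# omega = (1, 0) strictly exceeds every finite ordinal (n,), just as the rank of a
# fixed point must strictly exceed the ranks of all its finite approximations.
assert all(ord_leq((n,), (1, 0)) and not ord_leq((1, 0), (n,)) for n in range(100))
```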

Relevance: 10.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
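
A toy version of the workflow, assuming both solvers have already been run on a learning set of realizations: reduce the proxy and exact response curves with (functional) PCA, learn a map between the two score spaces, and predict the exact curve of a new realization from its proxy response alone. The component count and the linear regression used for the score map are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_error_model(proxy_curves, exact_curves, n_components=3):
    """proxy_curves, exact_curves: arrays of shape (n_realizations, n_times) from the
    learning set; returns a predictor of the exact curve given only a proxy curve."""
    pca_proxy = PCA(n_components).fit(proxy_curves)
    pca_exact = PCA(n_components).fit(exact_curves)
    score_map = LinearRegression().fit(pca_proxy.transform(proxy_curves),
                                       pca_exact.transform(exact_curves))

    def predict_exact(new_proxy_curves):
        scores = score_map.predict(pca_proxy.transform(new_proxy_curves))
        return pca_exact.inverse_transform(scores)   # back to the curve space

    return predict_exact
```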