953 results for Quasi-Monte Carlo Methods


Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

We propose a novel method to calculate the electronic density of states (DOS) of a two-dimensional disordered binary alloy. The method is highly reliable and numerically efficient, and short-range order (SRO) correlations can be included at no extra computational cost. The approach rests on one-dimensional calculations and is applied to very long stripes of finite width, the bulk regime being reached with a relatively small number of chains in the disordered case. Our approach is exact for the pure case and predicts the correct DOS structure in important limits, such as the segregated, random, and ordered alloy regimes. We also suggest important extensions of the present work.
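
As a rough illustration of the quantity being computed (a brute-force direct diagonalization on a small stripe, not the authors' one-dimensional method; lattice size, on-site energies, and concentration are all hypothetical), the DOS of a disordered binary alloy can be estimated by averaging eigenvalue histograms of a tight-binding Hamiltonian over disorder realizations:

```python
# Illustrative tight-binding DOS of a 2D binary alloy on a finite stripe,
# averaged over random A/B on-site energies.  All settings are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
W, L, t = 8, 60, 1.0                      # stripe width, length, hopping
eps_A, eps_B, conc_A = -1.0, 1.0, 0.5     # on-site energies, A concentration

def dos_sample():
    N = W * L
    onsite = np.where(rng.random(N) < conc_A, eps_A, eps_B)
    H = np.diag(onsite)
    for i in range(N):
        x, y = divmod(i, W)
        if y + 1 < W:                     # neighbour across the stripe
            H[i, i + 1] = H[i + 1, i] = -t
        if x + 1 < L:                     # neighbour along the stripe
            H[i, i + W] = H[i + W, i] = -t
    return np.linalg.eigvalsh(H)

energies = np.concatenate([dos_sample() for _ in range(10)])
hist, edges = np.histogram(energies, bins=80, density=True)
print("DOS estimate on", len(hist), "energy bins; range",
      f"[{edges[0]:.2f}, {edges[-1]:.2f}]")
```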

Relevance:

100.00%

Publisher:

Abstract:

A complete analysis of the sensitivity to new Hbb̄ couplings from the process e+e- → bb̄νν̄ at the next generation of linear colliders was performed. Such couplings are predicted by many extensions of the Standard Model. The results are comparable to those of a previous study, in which a global fit analysis for L = 500 fb⁻¹ and √s = 500 GeV resulted in a relative accuracy of 2.2% on the gHbb Yukawa coupling.

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

A data set from a commercial Nellore beef cattle selection program was used to compare breeding models that do or do not include marker effects in estimating breeding values when only a reduced number of animals have phenotypic, genotypic, and pedigree information available. The complete data set for this herd comprised 83,404 animals measured for weaning weight (WW), post-weaning gain (PWG), scrotal circumference (SC), and muscle score (MS), corresponding to 116,652 animals in the relationship matrix. Single-trait analyses were performed with the MTDFREML software to estimate fixed- and random-effect solutions from the complete data, and the estimated additive effects were taken as the reference breeding values for these animals. Each animal's observed phenotype for each trait was adjusted for the fixed- and random-effect solutions, except for the direct additive effects. The adjusted phenotype, composed of the additive and residual parts of the observed phenotype, was used as the dependent variable for model comparison. Of all measured animals in this herd, only 3160 were genotyped for 106 SNP markers. Three models were compared in terms of changes in animal ranking, global fit, and predictive ability: model 1 included only polygenic effects, model 2 only marker effects, and model 3 both polygenic and marker effects. Bayesian inference via Markov chain Monte Carlo methods, performed with the TM software, was used to analyse the data for model comparison. Two different priors were adopted for the marker effects in models 2 and 3: a uniform distribution (U) and a normal distribution (N). Higher rank correlation coefficients were observed for models 3_U and 3_N, indicating greater similarity between the rankings from these models and the ranking based on the reference breeding values. Model 3_N presented the best global fit, as demonstrated by its low DIC, and the best models in terms of predictive ability were models 1 and 3_N. Differences due to the prior assumed for the marker effects in models 2 and 3 can be attributed to the better ability of the normal prior to handle collinear effects. Models 2_U and 2_N performed worst, indicating that this small set of markers should not be used to genetically evaluate animals with no data, since its predictive ability is limited. In conclusion, model 3_N showed a slight superiority when a reduced number of animals have phenotypic, genotypic, and pedigree information, which can be attributed to the variation captured jointly by the marker and polygenic effects and to the normal prior assumed for the marker effects, which deals better with collinearity between markers.
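
The role the normal prior plays for collinear marker effects can be sketched with a toy calculation (Python, not the MTDFREML/TM analyses of the paper; all simulation settings are made up): with a flat prior the marker-effect estimates reduce to least squares, while a normal prior yields a ridge-type shrinkage estimator that is far more stable when markers are collinear.

```python
# Flat ("uniform") prior vs normal prior for marker effects on simulated,
# deliberately collinear SNP data.  Everything here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_markers = 300, 106            # small genotyped subset, as in the study

base = rng.binomial(2, 0.3, size=(n_animals, n_markers)).astype(float)
Z = base + 0.9 * np.roll(base, 1, axis=1)  # neighbouring markers nearly collinear
true_effects = rng.normal(0.0, 0.2, n_markers)
y = Z @ true_effects + rng.normal(0.0, 1.0, n_animals)  # adjusted phenotype

# Flat prior -> least-squares solution, unstable under collinearity
beta_flat = np.linalg.lstsq(Z, y, rcond=None)[0]

# Normal prior N(0, sigma_m^2) -> ridge-type posterior mean, which shrinks
# and stabilises collinear effects (lambda = sigma_e^2 / sigma_m^2, assumed known)
lam = 1.0 / 0.2**2
beta_normal = np.linalg.solve(Z.T @ Z + lam * np.eye(n_markers), Z.T @ y)

for name, b in [("flat prior", beta_flat), ("normal prior", beta_normal)]:
    rmse = np.sqrt(np.mean((b - true_effects) ** 2))
    print(f"{name:12s}  RMSE of marker effects: {rmse:.3f}")
```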

Relevance:

100.00%

Publisher:

Abstract:

Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R. B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R. J. Patz and B. W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items, and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, particularly under the Jeffreys prior. They also indicated that the asymmetry level has the largest impact on parameter recovery, even though that impact is relatively small. A real-data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with the ones obtained by Azevedo et al. The results indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnosis, and can be very useful for fitting more complex skew IRT models.
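
A minimal sketch of the stochastic representation the proposed algorithm builds on (illustrative parameter values; the conversion to the centred parameterization is omitted): a skew-normal variate is generated from a half-normal and an independent normal component, which is what allows the sampler to treat the half-normal variable as latent and keep a single Metropolis-Hastings step.

```python
# Henze (1986) representation: X = xi + omega*(delta*|U0| + sqrt(1-delta^2)*U1),
# with delta = alpha / sqrt(1 + alpha^2).  Parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
xi, omega, alpha = 0.0, 1.0, 5.0           # direct-parameterization skew-normal
delta = alpha / np.sqrt(1.0 + alpha**2)

u0 = np.abs(rng.standard_normal(200_000))  # half-normal component
u1 = rng.standard_normal(200_000)
x = xi + omega * (delta * u0 + np.sqrt(1.0 - delta**2) * u1)

# Cross-check against scipy's skew-normal with the same shape parameter
ref = stats.skewnorm.rvs(alpha, loc=xi, scale=omega, size=200_000, random_state=2)
print("mean:", x.mean(), "vs", ref.mean())
print("skew:", stats.skew(x), "vs", stats.skew(ref))
```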

Relevance:

100.00%

Publisher:

Abstract:

In many applications of lifetime data analysis, it is important to perform inferences about the change-point of the hazard function. The change-point can be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and it is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point of the hazard function can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance properties of maximum likelihood estimators. From the asymptotic normality of the maximum likelihood estimators, confidence intervals can also be obtained. Considering the exponentiated Weibull distribution for the lifetime data, we have different forms for the hazard function: constant, increasing, unimodal, decreasing, or bathtub-shaped. This model gives great flexibility of fit, but there are no analytic expressions for the change-point of the hazard function. We therefore consider the use of Markov chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
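
A compact sketch of this approach (hypothetical data and flat priors on the log-parameters; not the authors' code): sample the exponentiated Weibull parameters with a random-walk Metropolis algorithm and, for each posterior draw, locate the hazard change-point numerically on a grid.

```python
# Exponentiated Weibull: F(t) = (1 - exp(-(t/s)^a))^th, shapes a, th, scale s.
import numpy as np

rng = np.random.default_rng(42)

def log_pdf(t, a, th, s):
    z = (t / s) ** a
    return (np.log(th * a / s) + (a - 1.0) * np.log(t / s) - z
            + (th - 1.0) * np.log1p(-np.exp(-z)))

def log_sf(t, a, th, s):                      # log survival, S = 1 - F
    z = (t / s) ** a
    return np.log1p(-(-np.expm1(-z)) ** th)

def log_post(p, t):                           # flat prior on log-params (hypothetical)
    a, th, s = np.exp(p)
    return np.sum(log_pdf(t, a, th, s))

# Hypothetical lifetimes from an EW with a unimodal hazard (a < 1, a*th > 1)
u = rng.random(300)
data = 2.0 * (-np.log1p(-u ** (1.0 / 4.0))) ** (1.0 / 0.8)

cur = np.log(np.array([1.0, 2.0, 1.5]))
cur_lp = log_post(cur, data)
draws = []
for i in range(30_000):                       # random-walk Metropolis on log scale
    prop = cur + 0.08 * rng.standard_normal(3)
    lp = log_post(prop, data)
    if np.log(rng.random()) < lp - cur_lp:
        cur, cur_lp = prop, lp
    if i >= 10_000 and i % 20 == 0:           # burn-in and thinning
        draws.append(np.exp(cur))

# Change-point posterior: maximize h(t) = f(t)/S(t) numerically per draw
grid = np.linspace(1e-3, 15.0, 800)
cps = [grid[np.argmax(np.exp(log_pdf(grid, *d) - log_sf(grid, *d)))] for d in draws]
print("posterior median change-point:", np.median(cps),
      " 95% interval:", np.percentile(cps, [2.5, 97.5]))
```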

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we present techniques that can be used to speed up the calculation of perturbative matrix elements for observables with many legs ($n = 3, 4, 5, 6, 7, \ldots$). We investigate several ways to achieve this, including the use of Monte Carlo methods, the leading-color approximation, numerically less precise but faster operations, and SSE vectorization. An important idea is the use of "random polarizations", for which we derive subtraction terms for the real corrections in next-to-leading-order calculations. We demonstrate the effectiveness of all these methods in the context of electron-positron scattering to $n$ jets, with $n$ ranging from two to seven.
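
The random-polarization idea can be illustrated with a toy calculation (the amplitudes below are made-up complex numbers, not real matrix elements): averaging $|M(\varepsilon(\phi))|^2$ over a uniform random phase $\phi$, with $\varepsilon(\phi) = e^{i\phi}\varepsilon_+ + e^{-i\phi}\varepsilon_-$, reproduces the exact helicity sum, so a single random polarization per Monte Carlo event can replace the sum over all helicity configurations.

```python
# Toy check that the phase average reproduces |M_+|^2 + |M_-|^2.
import numpy as np

rng = np.random.default_rng(7)
M_plus, M_minus = 0.7 + 0.3j, -0.2 + 1.1j    # hypothetical helicity amplitudes

exact = abs(M_plus) ** 2 + abs(M_minus) ** 2 # exact helicity sum

phi = rng.uniform(0.0, 2.0 * np.pi, 100_000)
# The amplitude is linear in the polarization vector, so it decomposes as:
M_rand = np.exp(1j * phi) * M_plus + np.exp(-1j * phi) * M_minus
mc = np.mean(np.abs(M_rand) ** 2)            # cross term averages to zero

print(f"helicity sum: {exact:.4f}   MC over random polarizations: {mc:.4f}")
```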

Relevance:

100.00%

Publisher:

Abstract:

Geostatistics involves the fitting of spatially continuous models to spatially discrete data (Chilès and Delfiner, 1999). Preferential sampling arises when the process that determines the data-locations and the process being modelled are stochastically dependent. Conventional geostatistical methods assume, if only implicitly, that sampling is non-preferential. However, these methods are often used in situations where sampling is likely to be preferential. For example, in mineral exploration samples may be concentrated in areas thought likely to yield high-grade ore. We give a general expression for the likelihood function of preferentially sampled geostatistical data and describe how this can be evaluated approximately using Monte Carlo methods. We present a model for preferential sampling, and demonstrate through simulated examples that ignoring preferential sampling can lead to seriously misleading inferences. We describe an application of the model to a set of bio-monitoring data from Galicia, northern Spain, in which making allowance for preferential sampling materially changes the inferences.
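
The core phenomenon can be reproduced in a few lines (a 1D toy simulation with made-up parameters, not the Galicia analysis): when sample locations are drawn preferentially where the latent Gaussian process is high, the naive sample mean overestimates the spatial average.

```python
# Preferential vs uniform sampling of a Gaussian process on [0, 1].
import numpy as np

rng = np.random.default_rng(3)
n = 400                                     # grid points
x = np.linspace(0.0, 1.0, n)

# Gaussian process with exponential covariance, simulated via Cholesky
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
S = np.linalg.cholesky(cov + 1e-8 * np.eye(n)) @ rng.standard_normal(n)
Y = S + 0.1 * rng.standard_normal(n)        # noisy observations of S

# Preferential design: sampling probability proportional to exp(beta * S)
beta = 2.0
p = np.exp(beta * S); p /= p.sum()
pref_idx = rng.choice(n, size=50, replace=False, p=p)
unif_idx = rng.choice(n, size=50, replace=False)

print("true spatial mean of S:      ", S.mean())
print("mean of uniform sample:      ", Y[unif_idx].mean())
print("mean of preferential sample: ", Y[pref_idx].mean())  # biased upward
```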

Relevance:

100.00%

Publisher:

Abstract:

I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample χ²-approximation tests based on Pearson's X², the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440-464). The complex-survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221-230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov-Smirnov test for discrete data.
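
The Monte Carlo exact-test idea can be sketched as follows (in Python for illustration, with hypothetical counts; this is not the Stata command's source): simulate multinomial samples under the null and use the simulated Pearson X² values as the reference distribution.

```python
# Monte Carlo exact goodness-of-fit test for a multinomial null.
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([18, 30, 24, 8])          # hypothetical category counts
p0 = np.array([0.25, 0.25, 0.25, 0.25])       # null distribution
n = observed.sum()
expected = n * p0

x2_obs = np.sum((observed - expected) ** 2 / expected)

reps = rng.multinomial(n, p0, size=100_000)   # replicates under H0
x2_sim = np.sum((reps - expected) ** 2 / expected, axis=1)

# Add-one correction keeps the Monte Carlo p-value a valid test
p_value = (1 + np.sum(x2_sim >= x2_obs)) / (1 + len(x2_sim))
print(f"X2 = {x2_obs:.2f}, Monte Carlo p-value = {p_value:.4f}")
```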

Relevance:

100.00%

Publisher:

Abstract:

The current standard treatment for head and neck cancer at our institution uses intensity-modulated x-ray therapy (IMRT), which improves target coverage and sparing of critical structures by delivering complex fluence patterns from a variety of beam directions to conform dose distributions to the shape of the target volume. The standard treatment for breast patients is field-in-field forward-planned IMRT, with initial tangential fields and additional reduced-weight tangents with blocking to minimize hot spots. For these treatment sites, the addition of electrons has the potential to improve target coverage and sparing of critical structures due to rapid dose falloff with depth and reduced exit dose. In this work, the use of mixed-beam therapy (MBT), i.e., combined intensity-modulated electron and x-ray beams using the x-ray multi-leaf collimator (MLC), was explored. The hypothesis of this study was that adding intensity-modulated electron beams to existing clinical IMRT plans would produce MBT plans superior to the original IMRT plans for at least 50% of selected head and neck cases and 50% of breast cases. Dose calculations for electron beams collimated by the MLC were performed with Monte Carlo methods. An automation system was created to facilitate communication between the dose calculation engine and the treatment planning system. Energy and intensity modulation of the electron beams was accomplished by dividing the electron beams into 2 × 2 cm² beamlets, which were then beam-weight optimized along with the intensity-modulated x-ray beams. Treatment plans were optimized to obtain equivalent target dose coverage and then compared with the original treatment plans. MBT treatment plans were evaluated by participating physicians with respect to target coverage, normal structure dose, and overall plan quality in comparison with the original clinical plans. The physician evaluations did not support the hypothesis for either site, with MBT selected as superior in 1 of the 15 head and neck cases (p=1) and 6 of 18 breast cases (p=0.95). While MBT was not shown to be superior to IMRT, reductions were observed in doses to critical structures distal to the target along the electron beam direction and to non-target tissues, at the expense of target coverage and dose homogeneity.
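
The beamlet weight-optimization step can be caricatured as a nonnegative least-squares problem (the dose-influence matrix below is random, standing in for the Monte Carlo-calculated electron and x-ray beamlet doses; sizes, prescription values, and the 80-voxel target region are hypothetical):

```python
# Toy beam-weight optimization: find nonnegative beamlet weights w so the
# summed dose D @ w matches the prescription in the least-squares sense.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_voxels, n_beamlets = 200, 40
D = rng.random((n_voxels, n_beamlets))        # dose per unit beamlet weight
target = np.where(np.arange(n_voxels) < 80, 60.0, 5.0)  # target vs normal tissue

w, residual = nnls(D, target)                 # nonnegative beamlet weights
dose = D @ w
print(f"target mean: {dose[:80].mean():.1f}   "
      f"normal-tissue mean: {dose[80:].mean():.1f}")
```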

Relevance:

100.00%

Publisher:

Abstract:

A multivariate frailty hazard model is developed for joint modeling of three correlated time-to-event outcomes: (1) local recurrence, (2) distant recurrence, and (3) overall survival. The term frailty is introduced to model population heterogeneity; the dependence is modeled by conditioning on a shared frailty that is included in the three hazard functions. Independent variables can be included in the model as covariates. Markov chain Monte Carlo methods are used to estimate the posterior distributions of the model parameters. The algorithm used in the present application is the hybrid Metropolis-Hastings algorithm, which updates all parameters simultaneously using evaluations of the gradient of the log posterior density. The performance of this approach is examined in simulation studies using exponential and Weibull distributions. We apply the proposed methods to the study of patients with soft tissue sarcoma that motivated this research. Our results indicate that patients receiving chemotherapy had better overall survival, with a hazard ratio of 0.242 (95% CI: 0.094-0.564), and a lower risk of distant recurrence, with a hazard ratio of 0.636 (95% CI: 0.487-0.860), but no significant improvement in local recurrence, with a hazard ratio of 0.799 (95% CI: 0.575-1.054). The advantages and limitations of the proposed models and future research directions are discussed.
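
The shared-frailty construction can be sketched with a small simulation (hypothetical baseline rates, exponential hazards, no covariates; not the sarcoma data): one gamma frailty per patient multiplies the hazards of all three outcomes and thereby induces positive dependence among the event times.

```python
# Shared gamma frailty inducing correlation between time-to-event outcomes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 5_000
w = rng.gamma(shape=2.0, scale=0.5, size=n)   # frailty: mean 1, variance 0.5

rates = {"local": 0.05, "distant": 0.08, "death": 0.03}   # hypothetical
times = {k: rng.exponential(1.0 / (w * r)) for k, r in rates.items()}

# The shared frailty makes the outcomes positively dependent
tau, _ = stats.kendalltau(times["local"], times["distant"])
print(f"Kendall's tau, local vs distant recurrence times: {tau:.2f}")
```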

Relevance:

100.00%

Publisher:

Abstract:

A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered-subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and the interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for the modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail; the rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse-matrix techniques are employed for the non-zero system matrix elements, allowing fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a fast 2D implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery, and noise reduction compared with conventional 2D approaches based on rebinning schemes. At the same time, it demonstrates that fully 3D methodologies can be applied efficiently to the image reconstruction problem for high-resolution rotational PET cameras by using accurate precalculated system models and taking advantage of the system's symmetries.
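
The ordered-subsets update at the heart of such a reconstruction can be sketched compactly (a random sparse matrix stands in for the Monte Carlo-precalculated system matrix; all sizes and the count level are hypothetical):

```python
# Minimal OSEM: multiplicative EM update restricted to one subset of
# projection bins at a time.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(9)
n_bins, n_voxels, n_subsets = 1200, 400, 4
A = 50.0 * sparse.random(n_bins, n_voxels, density=0.02,
                         random_state=9, format="csr")
x_true = rng.random(n_voxels)
y = rng.poisson(A @ x_true)                    # noisy projection data

x = np.ones(n_voxels)                          # uniform initial image
subsets = np.array_split(rng.permutation(n_bins), n_subsets)
for _ in range(10):                            # full iterations
    for idx in subsets:                        # one update per subset
        As = A[idx]                            # rows of this subset
        sens = np.asarray(As.sum(axis=0)).ravel() + 1e-12  # sensitivity image
        x *= (As.T @ (y[idx] / (As @ x + 1e-12))) / sens

print("reconstruction RMSE:", np.sqrt(np.mean((x - x_true) ** 2)))
```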

Relevance:

100.00%

Publisher:

Abstract:

With a Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence have also been used: approaches based on Genetic Algorithms and Artificial Neural Networks have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite their advantages, Artificial Neural Networks still have some drawbacks, mainly in the design process of the network, e.g., the optimal selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining Artificial Neural Networks and Genetic Algorithms have been utilized to overcome these drawbacks. In this work, several ANN topologies were trained and tested, using both conventional Artificial Neural Networks and genetically evolved Artificial Neural Networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures is presented.
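
The ANN side of such an unfolding can be sketched as follows (synthetic response matrix and training spectra, not the measured detector data; the network size is arbitrary): train a small multilayer perceptron to map sphere count rates back to the spectrum that produced them.

```python
# Illustrative ANN-based unfolding with fully synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_spheres, n_energy_bins = 7, 30
R = rng.random((n_spheres, n_energy_bins))     # hypothetical response matrix

# Training pairs: random normalized spectra and their (noisy) count rates
spectra = rng.random((2_000, n_energy_bins))
spectra /= spectra.sum(axis=1, keepdims=True)
counts = spectra @ R.T
counts += 0.01 * counts.std() * rng.standard_normal(counts.shape)

net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2_000, random_state=0)
net.fit(counts, spectra)                       # count rates in, spectrum out

test_counts = spectra[:1] @ R.T
print("unfolded (first 5 bins):", np.round(net.predict(test_counts)[0][:5], 3))
print("true     (first 5 bins):", np.round(spectra[0][:5], 3))
```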

Relevance:

100.00%

Publisher:

Abstract:

The neutronics hall of the Nuclear Engineering Department at the Polytechnical University of Madrid has been characterized. The neutron spectra and the ambient dose equivalent produced by a 241AmBe source were measured at various source-to-detector distances on the new bench. Using Monte Carlo methods, a detailed model of the neutronics hall was built, and the neutron spectra and ambient dose equivalent were calculated at the same locations where the measurements were carried out. Good agreement between measured and calculated values was found.