28 results for MONTE-CARLO METHODS
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN, to be used as a secondary dosimetry standard for low-energy X-rays, are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response by up to 11.0%. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
In this work, a Monte Carlo code was used to investigate the performance of different x-ray spectra in digital mammography, through a figure of merit (FOM) defined as FOM = CNR^2/D_g, with CNR being the contrast-to-noise ratio in the image and D_g the average glandular dose. The FOM was studied for breasts with different thicknesses t (2 cm ≤ t ≤ 8 cm) and glandular contents (25%, 50% and 75% glandularity). The anode/filter combinations evaluated were those traditionally employed in mammography (Mo/Mo, Mo/Rh, Rh/Rh), and a W anode combined with Al or K-edge filters (Zr, Mo, Rh, Pd, Ag, Cd, Sn), for tube potentials between 22 and 34 kVp. Results show that the W anode combined with K-edge filters provides higher values of FOM for all breast thicknesses investigated. Nevertheless, the most suitable filter and tube potential depend on the breast thickness, and for t ≥ 6 cm, they also depend on breast glandularity. Particularly for thick and dense breasts, a W anode combined with K-edge filters can greatly improve the digital technique, with values of FOM up to 200% greater than those obtained with the anode/filter combinations and tube potentials traditionally employed in mammography. For breasts with t < 4 cm, a generally good performance was obtained with the W anode combined with a 60 μm Mo filter at 24-25 kVp, while a 60 μm Pd filter provided a generally good performance at 24-26 kVp for t = 4 cm, and at 28-30 and 29-31 kVp for t = 6 and 8 cm, respectively.
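The figure of merit above is a direct ratio once the image contrast-to-noise ratio and the mean glandular dose are known for a candidate spectrum. A minimal sketch of the comparison, with purely illustrative numbers rather than the paper's simulated values:

```python
def figure_of_merit(signal, background, noise_std, mean_glandular_dose):
    """FOM = CNR^2 / D_g, with CNR = (signal - background) / noise_std.

    All inputs are illustrative; in the paper they come from Monte Carlo
    simulations of specific anode/filter spectra and breast phantoms.
    """
    cnr = (signal - background) / noise_std
    return cnr**2 / mean_glandular_dose

# Hypothetical comparison of two spectra for the same breast thickness
fom_mo_mo = figure_of_merit(110.0, 100.0, noise_std=4.0, mean_glandular_dose=1.5)
fom_w_pd = figure_of_merit(108.0, 100.0, noise_std=3.0, mean_glandular_dose=1.0)
print(f"Mo/Mo: {fom_mo_mo:.2f}  W/Pd: {fom_w_pd:.2f}")
```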
Abstract:
Using fixed-node diffusion quantum Monte Carlo (FN-DMC) simulations and density functional theory (DFT) within the generalized gradient approximations, we calculate the total energies of the relaxed and unrelaxed neutral, cationic, and anionic aluminum clusters, Al_n (n = 1-13). From the obtained total energies, we extract the ionization potential and electron detachment energy and compare with previous theoretical and experimental results. Our results for the electronic properties from both the FN-DMC and DFT calculations are in reasonably good agreement with the available experimental data. A comparison between the FN-DMC and DFT results reveals that their differences are a few tenths of an electronvolt for both the ionization potential and the electron detachment energy. We also observe two distinct behaviors in the electron correlation contribution to the total energies from smaller to larger clusters, which could be assigned to the structural transition of the clusters from planar to three-dimensional occurring at n = 4 to 5.
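The quoted electronic properties follow from simple total-energy differences between charge states. A minimal sketch with hypothetical total energies (the numbers below are illustrative, not the paper's data):

```python
def ionization_potential(E_neutral, E_cation):
    """Adiabatic ionization potential: IP = E(cation) - E(neutral)."""
    return E_cation - E_neutral

def electron_detachment_energy(E_anion, E_neutral):
    """Adiabatic electron detachment energy of the anion: E(neutral) - E(anion)."""
    return E_neutral - E_anion

# Hypothetical relaxed total energies (eV) for one Al_n cluster size
E_neutral, E_cation, E_anion = -1000.0, -993.8, -1002.1
print(ionization_potential(E_neutral, E_cation))       # 6.2 eV
print(electron_detachment_energy(E_anion, E_neutral))  # 2.1 eV
```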
Abstract:
The extension of Boltzmann-Gibbs thermostatistics proposed by Tsallis introduces an additional parameter q alongside the inverse temperature β. Here, we show that a previously introduced generalized Metropolis dynamics to evolve spin models is not local and does not obey the detailed energy balance. In this dynamics, locality is only retrieved for q = 1, which corresponds to the standard Metropolis algorithm. Nonlocality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated when a single spin is flipped. To circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized Metropolis dynamics that obeys the detailed energy balance. To compare the different critical values obtained with other generalized dynamics, we perform Monte Carlo simulations in equilibrium for the Ising model. By using short-time nonequilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamical critical exponents as functions of q. Even for q ≠ 1, we show that suitable time-evolving power laws can be found for each initial condition. Our numerical experiments corroborate the literature results when we use nonlocal dynamics, showing that short-time parameter determination works also in this case. However, the dynamics governed by the new master equation leads to different results for the critical temperatures and also for the critical exponents, affecting the universality classes. We further propose a simple algorithm to optimize modeling the time evolution with a power law, considering two successive refinements in a log-log plot.
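For orientation, a single-spin Metropolis update with a Tsallis q-exponential acceptance weight can be sketched as below for the 2D Ising model. This sketch uses the local energy change of one flip and assumes the usual q-exponential acceptance form purely for illustration; it does not reproduce the paper's generalized master equation (whose point is precisely that the naive generalization is nonlocal), and it reduces to standard Metropolis as q → 1.

```python
import numpy as np

def q_exponential(x, q):
    """e_q(x) = [1 + (1 - q) x]^(1/(1 - q)), recovering exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def metropolis_sweep(spins, beta, q, rng):
    """One lattice sweep of single-spin flips with acceptance min(1, e_q(-beta*dE))."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Local energy change of flipping spin (i, j), nearest-neighbour coupling J = 1
        nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2.0 * spins[i, j] * nb
        if rng.random() < min(1.0, q_exponential(-beta * dE, q)):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
metropolis_sweep(spins, beta=0.44, q=1.1, rng=rng)
```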
Abstract:
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4 - 59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete Computerized Tomography (CT) scan of the detector made it possible to find the correct crystal dimensions and position inside the capsule. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
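As a first-order sanity check on such a detector model, the full-energy efficiency can be approximated analytically as window transmission times absorption in the crystal, before running a full MC simulation. A minimal sketch; the attenuation coefficients and thicknesses below are hypothetical placeholders, and the formula ignores escape peaks and edge effects that an MC model accounts for:

```python
import math

def intrinsic_efficiency(mu_window_cm, t_window_cm, mu_si_cm, t_crystal_cm):
    """Approximate full-energy efficiency: photons must cross the entrance window
    (transmission factor) and then interact in the Si crystal (absorption factor).
    mu_* are linear attenuation coefficients (1/cm) at the energy of interest."""
    return math.exp(-mu_window_cm * t_window_cm) * (1.0 - math.exp(-mu_si_cm * t_crystal_cm))

# Hypothetical values: thin Be window, 3 mm thick Si(Li) crystal, ~20 keV photons
print(intrinsic_efficiency(mu_window_cm=0.5, t_window_cm=0.0254, mu_si_cm=10.0, t_crystal_cm=0.3))
```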
Abstract:
A data set from a commercial Nellore beef cattle selection program was used to compare breeding models that did or did not assume marker effects to estimate breeding values, when a reduced number of animals has phenotypic, genotypic and pedigree information available. The complete data set for this herd comprised 83,404 animals measured for weaning weight (WW), post-weaning gain (PWG), scrotal circumference (SC) and muscle score (MS), corresponding to 116,652 animals in the relationship matrix. Single-trait analyses were performed with the MTDFREML software to estimate fixed- and random-effect solutions using this complete data set. The estimated additive effects were taken as the reference breeding values for those animals. The individual observed phenotype of each trait was adjusted for the fixed- and random-effect solutions, except for the direct additive effects. The adjusted phenotype, composed of the additive and residual parts of the observed phenotype, was used as the dependent variable for model comparison. Among all measured animals of this herd, only 3160 animals were genotyped for 106 SNP markers. Three models were compared in terms of changes in the animals' ranking, global fit and predictive ability. Model 1 included only polygenic effects, model 2 included only marker effects and model 3 included both polygenic and marker effects. Bayesian inference via Markov chain Monte Carlo methods, performed with the TM software, was used to analyze the data for model comparison. Two different priors were adopted for the marker effects in models 2 and 3: the first was a uniform distribution (U) and the second assumed that marker effects were normally distributed (N). Higher rank correlation coefficients were observed for models 3_U and 3_N, indicating greater similarity between these models' animal rankings and the ranking based on the reference breeding values. Model 3_N presented a better global fit, as demonstrated by its low DIC. The best models in terms of predictive ability were models 1 and 3_N. Differences due to the prior assumed for the marker effects in models 2 and 3 could be attributed to the better ability of the normal prior to handle collinear effects. Models 2_U and 2_N presented the worst performance, indicating that this small set of markers should not be used to genetically evaluate animals with no data, since its predictive ability is restricted. In conclusion, model 3_N presented a slight superiority when a reduced number of animals has phenotypic, genotypic and pedigree information. This could be attributed to the variation captured by fitting polygenic and marker effects together and to the normal prior assumed for the marker effects, which deals better with the collinearity between markers. (C) 2012 Elsevier B.V. All rights reserved.
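The ranking comparison described above can be reproduced, in outline, by correlating each model's predicted breeding values with the reference values from the complete-data analysis. A minimal sketch with simulated values; Spearman's rank correlation is used here as an assumption, since the abstract does not name the exact statistic:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_animals = 500

# Hypothetical reference breeding values (complete-data analysis) and two models'
# predictions with different amounts of noise, standing in for models 1 and 3_N
ebv_reference = rng.normal(size=n_animals)
ebv_model1 = ebv_reference + rng.normal(scale=0.8, size=n_animals)
ebv_model3 = ebv_reference + rng.normal(scale=0.4, size=n_animals)

for name, pred in [("model 1", ebv_model1), ("model 3_N", ebv_model3)]:
    rho, _ = spearmanr(ebv_reference, pred)
    print(f"{name}: rank correlation with reference = {rho:.3f}")
```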
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R. B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R. J. Patz and B. W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, mainly using the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is considered jointly with the development of model fitting assessment tools. The results are compared with the ones obtained by Azevedo et al. and indicate that the hierarchical approach allows MCMC algorithms to be implemented more easily, facilitates convergence diagnostics and can be very useful for fitting more complex skew IRT models.
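The hierarchical structure exploited here is the stochastic representation of the skew-normal: a half-normal latent variable plus an independent normal term. A minimal sketch of drawing latent traits from this representation, using the direct (shape) parameterization; the centred parameterization used in the paper is a further reparameterization of the same construction:

```python
import numpy as np

def sample_skew_normal(n, delta, rng):
    """Henze (1986) representation: Z = delta * |U0| + sqrt(1 - delta^2) * U1,
    with U0, U1 iid N(0, 1) and delta = lambda / sqrt(1 + lambda^2) for shape
    parameter lambda."""
    u0 = np.abs(rng.standard_normal(n))
    u1 = rng.standard_normal(n)
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1

rng = np.random.default_rng(1)
z = sample_skew_normal(10_000, delta=0.9, rng=rng)
print(z.mean())  # close to delta * sqrt(2 / pi), reflecting the asymmetry
```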
Abstract:
In many applications of lifetime data analysis, it is important to perform inferences about the change-point of the hazard function. The change-point could be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and it is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point of the hazard function can be analytically calculated, its maximum likelihood estimator is easily obtained from the invariance properties of maximum likelihood estimators. From the asymptotic normality of the maximum likelihood estimators, confidence intervals can also be obtained. Considering the exponentiated Weibull distribution for the lifetime data, we have different forms for the hazard function: constant, increasing, unimodal, decreasing or bathtub-shaped. This model gives great flexibility of fit, but we do not have analytic expressions for the change-point of the hazard function. We therefore use Markov chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
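In practice, the change-point can be located numerically for each posterior draw of the parameters and the resulting values summarized. A minimal sketch of the numerical step, under one common parameterization of the exponentiated Weibull (the paper's parameterization may differ):

```python
import numpy as np

def ew_hazard(t, alpha, theta, sigma=1.0):
    """Hazard of the exponentiated Weibull with cdf F(t) = [1 - exp(-(t/sigma)^alpha)]^theta."""
    z = (t / sigma) ** alpha
    G = 1.0 - np.exp(-z)                           # baseline Weibull cdf
    f = theta * (alpha / sigma) * (t / sigma) ** (alpha - 1.0) * np.exp(-z) * G ** (theta - 1.0)
    return f / (1.0 - G ** theta)

# Grid search for the change-point in a unimodal case (alpha < 1, alpha*theta > 1);
# for bathtub shapes one would take the argmin instead of the argmax.
t = np.linspace(1e-3, 5.0, 5000)
h = ew_hazard(t, alpha=0.5, theta=4.0)
print("approximate change-point:", t[np.argmax(h)])
```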
Abstract:
The purpose of this paper is to develop a Bayesian analysis for right-censored survival data when immune or cured individuals may be present in the population from which the data are taken. In our approach, the number of competing causes of the event of interest follows the Conway-Maxwell-Poisson distribution, which generalizes the Poisson distribution. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the proposed model. Model selection is also discussed, and the approach is illustrated with a real data set.
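For reference, the Conway-Maxwell-Poisson distribution used for the number of competing causes has probability mass function P(N = n) = lam^n / ((n!)^nu * Z(lam, nu)), with normalizing constant Z(lam, nu) = sum_j lam^j / (j!)^nu; nu = 1 recovers the Poisson case. A minimal sketch of evaluating this pmf (the series truncation is an implementation choice, not part of the paper):

```python
import math

def com_poisson_pmf(n, lam, nu, truncation=200):
    """P(N = n) = lam^n / ((n!)^nu * Z), Z = sum_j lam^j / (j!)^nu (truncated).
    nu = 1 gives the Poisson distribution; nu < 1 is overdispersed, nu > 1 underdispersed."""
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(truncation)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(n * math.log(lam) - nu * math.lgamma(n + 1) - log_z)

print(com_poisson_pmf(2, lam=1.5, nu=1.0))  # matches Poisson(1.5) at n = 2 (~0.251)
print(com_poisson_pmf(2, lam=1.5, nu=0.5))  # an overdispersed alternative
```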
Abstract:
In this paper, we use Markov chain Monte Carlo (MCMC) methods in order to estimate and compare GARCH models from a Bayesian perspective. We allow for possibly heavy-tailed and asymmetric distributions in the error term. We use a general method proposed in the literature to introduce skewness into a continuous unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood, based on the MCMC output. From these approximations we compute Bayes factors and posterior model probabilities. (C) 2012 IMACS. Published by Elsevier B.V. All rights reserved.
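The final step described above is a simple transformation of the approximate marginal likelihoods. A minimal sketch, with hypothetical log marginal likelihoods for three candidate error distributions:

```python
import numpy as np

def posterior_model_probabilities(log_marginal_likelihoods, prior_probs=None):
    """Posterior model probabilities from (approximate) log marginal likelihoods.
    With equal prior model probabilities this is a softmax of the log marginal
    likelihoods; pairwise Bayes factors are exponentials of their differences."""
    log_ml = np.asarray(log_marginal_likelihoods, dtype=float)
    if prior_probs is None:
        prior_probs = np.full(log_ml.shape, 1.0 / log_ml.size)
    log_post = log_ml + np.log(prior_probs)
    log_post -= log_post.max()            # stabilize before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

log_ml = [-1250.3, -1247.9, -1248.4]      # hypothetical MCMC-based approximations
print(posterior_model_probabilities(log_ml))
print("Bayes factor, model 2 vs model 1:", np.exp(log_ml[1] - log_ml[0]))
```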
Abstract:
The log-Burr XII regression model for grouped survival data is evaluated in the presence of many ties. The methodology for grouped survival data is based on life tables, where the times are grouped into k intervals, and we fit discrete lifetime regression models to the data. The model parameters are estimated by maximum likelihood and jackknife methods. To detect influential observations in the proposed model, diagnostic measures based on case deletion, the so-called global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to these measures, the total local influence and influential estimates are also used. We conduct Monte Carlo simulation studies to assess the finite-sample behavior of the maximum likelihood estimators of the proposed model for grouped survival data. A real data set is analyzed using a regression model for grouped data.
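The global-influence idea (case deletion) is generic: refit the model without each observation and measure how much the fit changes. A minimal sketch using the likelihood displacement LD_i = 2 * [loglik(theta_hat) - loglik(theta_hat_(i))], where theta_hat_(i) is the estimate after deleting case i; a toy normal-mean model keeps the sketch self-contained, whereas the paper applies the same idea to the log-Burr XII regression:

```python
import numpy as np

def likelihood_displacement(data, loglik, fit):
    """Case-deletion (global influence) diagnostic:
    LD_i = 2 * [loglik(theta_hat; data) - loglik(theta_hat_(i); data)],
    where theta_hat_(i) is the estimate with case i removed."""
    theta_full = fit(data)
    ll_full = loglik(theta_full, data)
    ld = np.empty(len(data))
    for i in range(len(data)):
        theta_i = fit(np.delete(data, i))
        ld[i] = 2.0 * (ll_full - loglik(theta_i, data))
    return ld

# Toy model: normal mean with unit variance; one artificial outlier at the end
fit = lambda y: y.mean()
loglik = lambda mu, y: -0.5 * np.sum((y - mu) ** 2)   # up to an additive constant
y = np.append(np.random.default_rng(0).normal(size=30), 8.0)
print(np.argmax(likelihood_displacement(y, loglik, fit)))  # flags the outlier (index 30)
```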
Abstract:
Background: Exposure to fine fractions of particulate matter (PM2.5) is associated with increased hospital admissions and mortality from respiratory and cardiovascular disease in children and the elderly. This study aims to estimate the toxicological risk of PM2.5 from biomass burning in children and adolescents between the ages of 6 and 14 in Tangará da Serra, a municipality of the Subequatorial Brazilian Amazon. Methods: Risk assessment methodology was applied to estimate the risk quotient in two exposure scenarios according to local seasonality. The potential dose of PM2.5 was estimated using Monte Carlo simulation, stratifying the population by age, gender, asthma and Body Mass Index (BMI). Results: Male asthmatic children under the age of 8 with normal BMI had the highest risk quotient among the subgroups. The general potential average dose of PM2.5 was 1.95 μg/kg·day (95% CI: 1.62 - 2.27) during the dry scenario and 0.32 μg/kg·day (95% CI: 0.29 - 0.34) in the rainy scenario. During the dry season, children and adolescents showed a toxicological risk to PM2.5 of 2.07 μg/kg·day (95% CI: 1.85 - 2.30). Conclusions: Children and adolescents living in the Subequatorial Brazilian Amazon region were exposed to high levels of PM2.5, resulting in toxicological risk for this multi-pollutant. The toxicological risk quotients of children in this region were comparable to or higher than those of children living in metropolitan regions with PM2.5 air pollution above the limits recommended for human health.
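A potential-dose Monte Carlo of this kind propagates distributions of concentration, inhalation rate and body weight into a dose distribution, and the risk quotient compares the dose with a reference value. A minimal sketch with hypothetical input distributions and a hypothetical reference dose; neither the formula details nor the numbers below are the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical input distributions for the dry-season scenario
conc = rng.lognormal(mean=np.log(25.0), sigma=0.4, size=n)   # PM2.5 concentration, ug/m^3
inhalation = rng.normal(10.0, 1.5, size=n).clip(min=5.0)     # inhalation rate, m^3/day
body_weight = rng.normal(30.0, 6.0, size=n).clip(min=15.0)   # body weight, kg

dose = conc * inhalation / body_weight                        # potential dose, ug/kg/day
reference_dose = 10.0                                         # hypothetical reference, ug/kg/day
risk_quotient = dose / reference_dose

print(f"median dose = {np.median(dose):.2f} ug/kg/day "
      f"(2.5-97.5 percentiles: {np.percentile(dose, 2.5):.2f} - {np.percentile(dose, 97.5):.2f})")
print("P(risk quotient > 1) =", (risk_quotient > 1).mean())
```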
Abstract:
The physical properties of small rhodium clusters, Rh_n, have been under debate due to the shortcomings of density functional theory (DFT). To help in the solution of those problems, we obtained a set of putative lowest-energy structures for small Rh_n (n = 2-15) clusters employing hybrid DFT and the generalized gradient approximation (GGA). For n = 2-6, both hybrid and GGA functionals yield similar ground-state structures (compact); however, for n = 7-15 the hybrid functional favors compact structures, while GGA favors open structures based on simple cubic motifs. Thus, experimental results are crucial to indicate the correct ground-state structures; however, we found that a single set of structures (compact or open) is unable to explain all available experimental data. For example, the GGA structures (open) yield total magnetic moments in excellent agreement with experimental data, while the hybrid structures (compact) have larger magnetic moments than experiment due to the increased localization of the 4d states. Thus, we would conclude that GGA provides a better description of the Rh_n clusters; however, a recent experimental-theoretical study [Harding et al., J. Chem. Phys. 133, 214304 (2010)] found that only compact structures are able to explain the experimental vibrational data, while open structures cannot. This indicates that the study of Rh_n clusters is a challenging problem; further experimental studies are required to help solve this conundrum, as well as a better description of the exchange and correlation effects in the Rh_n clusters using theoretical methods such as the quantum Monte Carlo method.
Weibull and generalised exponential overdispersion models with an application to ozone air pollution
Abstract:
We consider the problem of estimating the mean and variance of the time between occurrences of an event of interest (inter-occurrence times), where some forms of dependence between two consecutive time intervals are allowed. Two basic density functions are taken into account: the Weibull and the generalised exponential density functions. In order to capture the dependence between two consecutive inter-occurrence times, we assume that the shape and/or the scale parameters of the two density functions are given by auto-regressive models. The expressions for the mean and variance of the inter-occurrence times are presented. The models are applied to ozone data from two regions of Mexico City. The parameters are estimated from a Bayesian point of view via Markov chain Monte Carlo (MCMC) methods.
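The building blocks are the moments of the Weibull density and the autoregressive link between consecutive intervals. A minimal sketch of both; the toy dependence below ties the log-scale of one interval to the previous observed interval, which is an assumption standing in for the paper's exact specification:

```python
import numpy as np
from math import gamma

def weibull_mean_var(shape, scale):
    """Mean and variance of a Weibull(shape, scale) inter-occurrence time."""
    m = scale * gamma(1.0 + 1.0 / shape)
    v = scale**2 * (gamma(1.0 + 2.0 / shape) - gamma(1.0 + 1.0 / shape) ** 2)
    return m, v

def simulate_dependent_times(n, shape, a, b, t0, rng):
    """Toy dependence: the log-scale of interval t is a + b * log(previous interval)."""
    times = [t0]
    for _ in range(n):
        scale_t = np.exp(a + b * np.log(times[-1]))
        times.append(rng.weibull(shape) * scale_t)
    return np.array(times[1:])

rng = np.random.default_rng(3)
print(weibull_mean_var(shape=1.8, scale=10.0))
print(simulate_dependent_times(5, shape=1.8, a=0.5, b=0.8, t0=10.0, rng=rng))
```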
Abstract:
Aims. We report the discovery of CoRoT-16b, a low-density hot Jupiter that orbits a faint G5V star (m_V = 15.63) in 5.3523 +/- 0.0002 days with slight eccentricity. A fit of the data with no a priori assumptions on the orbit leads to an eccentricity of 0.33 +/- 0.1. We discuss this value and also derive the mass and radius of the planet. Methods. We analyse the photometric transit curve of CoRoT-16 obtained by the CoRoT satellite, and radial velocity data from the HARPS and HIRES spectrometers. A combined analysis using a Markov chain Monte Carlo algorithm is used to obtain the system parameters. Results. CoRoT-16b is a 0.535 (-0.083/+0.085) M_J, 1.17 (-0.14/+0.16) R_J hot Jupiter with a density of 0.44 (-0.14/+0.21) g cm^-3. Despite its short orbital distance (0.0618 +/- 0.0015 AU) and the age of the parent star (6.73 +/- 2.8 Gyr), the planet's orbit exhibits a significantly non-zero eccentricity. This is very uncommon for this type of object, as tidal effects tend to circularise the orbit. This value is discussed taking into account the characteristics of the star and the observation accuracy.