21 results for Markov chain Monte Carlo method
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN, to be used as a secondary dosimetry standard for low-energy X-rays, are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response by up to 11.0%.
Abstract:
In this work, a Monte Carlo code was used to investigate the performance of different x-ray spectra in digital mammography through a figure of merit (FOM), defined as FOM = CNR^2/D̄_g, where CNR is the contrast-to-noise ratio in the image and D̄_g is the average glandular dose. The FOM was studied for breasts with different thicknesses t (2 cm ≤ t ≤ 8 cm) and glandular contents (25%, 50% and 75% glandularity). The anode/filter combinations evaluated were those traditionally employed in mammography (Mo/Mo, Mo/Rh, Rh/Rh), and a W anode combined with Al or K-edge filters (Zr, Mo, Rh, Pd, Ag, Cd, Sn), for tube potentials between 22 and 34 kVp. Results show that the W anode combined with K-edge filters provides higher values of FOM for all breast thicknesses investigated. Nevertheless, the most suitable filter and tube potential depend on the breast thickness and, for t ≥ 6 cm, also on breast glandularity. Particularly for thick and dense breasts, a W anode combined with K-edge filters can greatly improve the digital technique, with FOM values up to 200% greater than those obtained with the anode/filter combinations and tube potentials traditionally employed in mammography. For breasts with t < 4 cm, generally good performance was obtained with the W anode combined with 60 μm of the Mo filter at 24-25 kVp, while 60 μm of the Pd filter provided generally good performance at 24-26 kVp for t = 4 cm, and at 28-30 and 29-31 kVp for t = 6 and 8 cm, respectively.
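The figure of merit above is a straightforward computation once the contrast-to-noise ratio and the mean glandular dose are known; a minimal sketch, using hypothetical pixel values and doses rather than the paper's simulated data, is:

```python
def figure_of_merit(signal, background, noise_std, mean_glandular_dose):
    """FOM = CNR^2 / D_g, with CNR the contrast-to-noise ratio of a detail
    against the background and D_g the mean glandular dose (e.g. in mGy)."""
    cnr = (signal - background) / noise_std
    return cnr ** 2 / mean_glandular_dose

# Hypothetical values for two candidate spectra imaging the same phantom.
fom_mo_mo = figure_of_merit(signal=520.0, background=480.0, noise_std=8.0,
                            mean_glandular_dose=1.4)
fom_w_pd = figure_of_merit(signal=510.0, background=470.0, noise_std=7.0,
                           mean_glandular_dose=1.0)
print(f"FOM Mo/Mo: {fom_mo_mo:.1f}   FOM W/Pd: {fom_w_pd:.1f}")
```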
Abstract:
Using fixed-node diffusion quantum Monte Carlo (FN-DMC) simulations and density functional theory (DFT) within the generalized gradient approximations, we calculate the total energies of the relaxed and unrelaxed neutral, cationic, and anionic aluminum clusters Al_n (n = 1-13). From the obtained total energies, we extract the ionization potential and electron detachment energy and compare them with previous theoretical and experimental results. Our results for the electronic properties from both the FN-DMC and DFT calculations are in reasonably good agreement with the available experimental data. A comparison between the FN-DMC and DFT results reveals that their differences are a few tenths of an electronvolt for both the ionization potential and the electron detachment energy. We also observe two distinct behaviors in the electron correlation contribution to the total energies from smaller to larger clusters, which could be assigned to the structural transition of the clusters from planar to three-dimensional geometries occurring at n = 4 to 5.
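The electronic properties quoted here are simple energy differences between charge states; a small sketch, with placeholder total energies that are not the FN-DMC values, of how the ionization potential and electron detachment energy follow from the cluster total energies:

```python
# Placeholder total energies of one cluster in its three charge states (hartree).
E_neutral = -57.218   # relaxed neutral cluster
E_cation = -56.995    # relaxed cationic cluster
E_anion = -57.301     # relaxed anionic cluster

HARTREE_TO_EV = 27.211386

# Ionization potential: energy to remove an electron from the neutral cluster.
ionization_potential = (E_cation - E_neutral) * HARTREE_TO_EV
# Electron detachment energy: energy to remove the extra electron from the anion.
detachment_energy = (E_neutral - E_anion) * HARTREE_TO_EV

print(f"IP  = {ionization_potential:.2f} eV")
print(f"EDE = {detachment_energy:.2f} eV")
```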
Abstract:
The extension of Boltzmann-Gibbs thermostatistics proposed by Tsallis introduces an additional parameter q to the inverse temperature β. Here, we show that a previously introduced generalized Metropolis dynamics to evolve spin models is not local and does not obey the detailed energy balance. In this dynamics, locality is only retrieved for q = 1, which corresponds to the standard Metropolis algorithm. Nonlocality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated when a single spin is flipped. To circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized Metropolis dynamics that obeys the detailed energy balance. To compare the different critical values obtained with other generalized dynamics, we perform equilibrium Monte Carlo simulations for the Ising model. By using short-time nonequilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamical critical exponents as functions of q. Even for q ≠ 1, we show that suitable time-evolving power laws can be found for each initial condition. Our numerical experiments corroborate the literature results when we use nonlocal dynamics, showing that short-time parameter determination also works in this case. However, the dynamics governed by the new master equation leads to different results for the critical temperatures and also for the critical exponents, affecting universality classes. We further propose a simple algorithm to optimize modeling of the time evolution with a power law, considering two successive refinements in a log-log plot.
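For reference, the standard local Metropolis dynamics that the generalized dynamics reduces to at q = 1 only needs the energy change of the flipped spin's neighbourhood, which is what locality means here. A minimal sketch for the 2D Ising model follows; the q ≠ 1 generalization proposed in the paper is not reproduced, and the lattice size and temperature are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, J=1.0):
    """One sweep of the standard (q = 1) local Metropolis dynamics for the
    2D Ising model with periodic boundaries: only the local energy change
    of the flipped spin is evaluated."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        neighbours = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * neighbours
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(200):
    metropolis_sweep(spins, beta=0.44)   # close to the 2D critical point
print("magnetisation per spin:", spins.mean())
```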
Abstract:
A detailed characterization of an X-ray Si(Li) detector was performed to obtain the energy dependence of its efficiency in the photon energy range of 6.4-59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer's parameters for the detector were used in the simulation. A complete computerized tomography (CT) scan of the detector allowed the correct crystal dimensions and position inside the capsule to be determined. The efficiencies computed with the resulting detector model differed from the measured values by no more than 10% over most of the energy range.
Abstract:
In this paper we propose a hybrid hazard regression model with threshold stress which includes the proportional hazards and the accelerated failure time models as particular cases. To describe the behavior of the lifetimes, the generalized gamma distribution is assumed, and an inverse power law model with a threshold stress is considered. For parameter estimation we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo techniques. We assume proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under the assumption of vague priors. Further, some discussion on model selection criteria is given. The methodology is illustrated on simulated and real lifetime data sets.
Abstract:
Hepatitis C virus (HCV) is a public health problem throughout the world, and 3% of the world population is infected with this virus. It is estimated that 3-4 million individuals are infected every year. Around 1.5% of the Brazilian population is estimated to be anti-HCV positive, and the Northeast region shows the highest prevalence in Brazil. The aim of this study was to characterize HCV genotypes circulating in Pernambuco State (PE), Brazil, located in the Northeast region of the country. This study included 85 anti-HCV positive patients followed up between 2004 and 2011. For genotyping, a 380 bp fragment of HCV RNA in the NS5B region was amplified by nested PCR. Phylogenetic analysis was conducted by Bayesian Markov chain Monte Carlo (MCMC) simulation using BEAST v.1.5.3. Of the 85 samples, 63 (74.1%) were positive for the NS5B fragment and successfully sequenced. Subtype 1b was the most prevalent in this population (42 samples; 66.7%), followed by 3a (16; 25.4%), 1a (4; 6.3%) and 2b (1; 1.6%). Twelve (63.1%) and seven (36.9%) patients with both HCV and schistosomiasis were infected with subtypes 1b and 3a, respectively. Brazil is a large country with many different population backgrounds, so a large variation in the frequencies of HCV genotypes is to be expected throughout its territory. This study reports HCV genotypes from Pernambuco State, where subtype 1b was found to be the most prevalent. Phylogenetic analysis suggests that different HCV strains circulate within this population.
Abstract:
The purpose of this paper is to develop a Bayesian analysis for right-censored survival data when immune or cured individuals may be present in the population from which the data are taken. In our approach the number of competing causes of the event of interest follows the Conway-Maxwell-Poisson distribution, which generalizes the Poisson distribution. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the proposed model. Some discussion of model selection and an illustration with a real data set are also provided.
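A minimal sketch of the Conway-Maxwell-Poisson probability mass function assumed here for the number of competing causes, with the normalizing constant evaluated by truncating its series; the parameter names lam and nu and the truncation point are illustrative choices, not the paper's notation:

```python
import numpy as np

def com_poisson_pmf(k, lam, nu, kmax=200):
    """Conway-Maxwell-Poisson pmf P(K = k) = lam^k / (k!)^nu / Z(lam, nu).

    nu = 1 recovers the Poisson distribution; nu < 1 gives overdispersion
    and nu > 1 underdispersion. Z(lam, nu) is approximated by truncating
    its series at kmax."""
    ks = np.arange(kmax + 1)
    log_fact = np.array([np.sum(np.log(np.arange(1, j + 1))) for j in ks])
    log_terms = ks * np.log(lam) - nu * log_fact
    log_Z = np.logaddexp.reduce(log_terms)
    log_pk = k * np.log(lam) - nu * np.sum(np.log(np.arange(1, k + 1))) - log_Z
    return np.exp(log_pk)

# With nu = 1 this matches the Poisson(1.5) pmf at k = 2 (about 0.251).
print(com_poisson_pmf(2, lam=1.5, nu=1.0))
```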
Abstract:
A data set from a commercial Nellore beef cattle selection program was used to compare breeding models that do or do not include marker effects to estimate breeding values when only a reduced number of animals have phenotypic, genotypic and pedigree information available. The complete data set for this herd comprised 83,404 animals measured for weaning weight (WW), post-weaning gain (PWG), scrotal circumference (SC) and muscle score (MS), corresponding to 116,652 animals in the relationship matrix. Single-trait analyses were performed with the MTDFREML software to estimate fixed- and random-effect solutions from this complete data set. The estimated additive effects were taken as the reference breeding values for those animals. The observed phenotype of each trait for each animal was adjusted for the fixed- and random-effect solutions, except for the direct additive effects. The adjusted phenotype, composed of the additive and residual parts of the observed phenotype, was used as the dependent variable for model comparison. Among all measured animals of this herd, only 3160 animals were genotyped for 106 SNP markers. Three models were compared in terms of changes in the animals' ranking, global fit and predictive ability. Model 1 included only polygenic effects, model 2 included only marker effects and model 3 included both polygenic and marker effects. Bayesian inference via Markov chain Monte Carlo methods performed with the TM software was used to analyze the data for model comparison. Two different priors were adopted for the marker effects in models 2 and 3: the first was a uniform distribution (U) and the second assumed that marker effects were normally distributed (N). Higher rank correlation coefficients were observed for models 3_U and 3_N, indicating greater similarity between these models' rankings of the animals and the ranking based on the reference breeding values. Model 3_N presented a better global fit, as indicated by its lower DIC. The best models in terms of predictive ability were models 1 and 3_N. Differences due to the prior assumed for the marker effects in models 2 and 3 could be attributed to the better ability of the normal prior to handle collinear effects. Models 2_U and 2_N performed worst, indicating that this small set of markers should not be used to genetically evaluate animals with no data, since its predictive ability is limited. In conclusion, model 3_N was slightly superior when only a reduced number of animals have phenotypic, genotypic and pedigree information, which can be attributed to the variation captured by fitting marker and polygenic effects together and to the normal prior assumed for the marker effects, which deals better with the collinearity between markers.
Abstract:
In this paper we use Markov chain Monte Carlo (MCMC) methods in order to estimate and compare GARCH models from a Bayesian perspective. We allow for possibly heavy-tailed and asymmetric distributions in the error term. We use a general method proposed in the literature to introduce skewness into a continuous unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood, based on the MCMC output. From these approximations we compute Bayes factors and posterior model probabilities.
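The final step described here, turning the marginal-likelihood approximations into Bayes factors and posterior model probabilities, is a short computation; a sketch with illustrative log marginal likelihoods (not the paper's values) under equal prior model probabilities:

```python
import numpy as np

# Illustrative log marginal likelihoods, e.g. approximated from MCMC output.
log_marglik = {"GARCH-normal": -1532.4, "GARCH-t": -1518.7, "GARCH-skew-t": -1516.9}

# Bayes factor of each model against a reference model.
ref = "GARCH-normal"
bayes_factors = {m: np.exp(v - log_marglik[ref]) for m, v in log_marglik.items()}

# Posterior model probabilities under equal prior model probabilities.
logs = np.array(list(log_marglik.values()))
post_prob = np.exp(logs - np.logaddexp.reduce(logs))

for (m, bf), p in zip(bayes_factors.items(), post_prob):
    print(f"{m:15s}  BF vs {ref}: {bf:10.3g}   P(M|y): {p:.3f}")
```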
Weibull and generalised exponential overdispersion models with an application to ozone air pollution
Abstract:
We consider the problem of estimating the mean and variance of the time between occurrences of an event of interest (inter-occurrence times) where some form of dependence between two consecutive time intervals is allowed. Two basic density functions are taken into account: the Weibull and the generalised exponential density functions. In order to capture the dependence between two consecutive inter-occurrence times, we assume that the shape and/or the scale parameters of the two density functions are given by auto-regressive models. The expressions for the mean and variance of the inter-occurrence times are presented. The models are applied to ozone data from two regions of Mexico City. The parameters are estimated from a Bayesian point of view via Markov chain Monte Carlo (MCMC) methods.
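One simple way to realise the kind of dependence described here is to let the scale parameter of each inter-occurrence time be driven by the previous interval through an autoregressive relation. The sketch below simulates such data; the log-linear link, the parameter names phi0 and phi1, and all values are illustrative and not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_dependent_weibull(n, shape, phi0, phi1, sigma_init):
    """Simulate inter-occurrence times x_t ~ Weibull(shape, scale=sigma_t)
    where log(sigma_t) = phi0 + phi1 * log(x_{t-1}), so each interval
    depends on the previous one (an illustrative AR-type structure)."""
    x = np.empty(n)
    sigma = sigma_init
    for t in range(n):
        x[t] = sigma * rng.weibull(shape)
        sigma = np.exp(phi0 + phi1 * np.log(x[t]))  # update scale from last interval
    return x

times = simulate_dependent_weibull(500, shape=1.3, phi0=0.5, phi1=0.4, sigma_init=2.0)
print("mean inter-occurrence time:", times.mean())
```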
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and the estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performs as well as that in [3] in terms of parameter recovery, mainly when the Jeffreys prior is used. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though this impact is relatively small. A real data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with the ones obtained by Azevedo et al. They indicate that the hierarchical approach allows MCMC algorithms to be implemented more easily, facilitates convergence diagnostics and can be very useful for fitting more complex skew IRT models.
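The stochastic representation underlying the proposed algorithm is Henze's construction of the skew-normal distribution; a minimal sketch in the direct parameterization, with an illustrative asymmetry value (the centred parameterization used in the paper is a reparameterization of the same construction):

```python
import numpy as np

rng = np.random.default_rng(1)

def skew_normal_draws(n, alpha):
    """Henze's stochastic representation of the skew-normal distribution:
    X = delta * |Z1| + sqrt(1 - delta^2) * Z2, with Z1, Z2 independent
    standard normals and delta = alpha / sqrt(1 + alpha^2)."""
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    z1 = np.abs(rng.standard_normal(n))
    z2 = rng.standard_normal(n)
    return delta * z1 + np.sqrt(1.0 - delta ** 2) * z2

x = skew_normal_draws(100_000, alpha=3.0)
print("sample skewness:", ((x - x.mean()) ** 3).mean() / x.std() ** 3)
```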
Abstract:
In many applications of lifetime data analysis, it is important to perform inferences about the change-point of the hazard function. The change-point could be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and it is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point of the hazard function can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance properties of maximum likelihood estimators, and confidence intervals follow from their asymptotic normality. Considering the exponentiated Weibull distribution for the lifetime data, the hazard function can be constant, increasing, decreasing, unimodal or bathtub-shaped. This model gives great flexibility of fit, but there is no analytic expression for the change-point of the hazard function. We therefore use Markov chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
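Since the change-point has no closed form for the exponentiated Weibull, it can be located numerically; with MCMC this would be done for every posterior draw of the parameters and the resulting sample of change-points summarized. A sketch using one common parameterization of the distribution and illustrative parameter values that give a unimodal hazard:

```python
import numpy as np

def exp_weibull_hazard(t, alpha, k, sigma):
    """Hazard of the exponentiated Weibull distribution with
    F(t) = [1 - exp(-(t/sigma)^k)]^alpha (one common parameterization)."""
    u = (t / sigma) ** k
    G = 1.0 - np.exp(-u)                                     # baseline Weibull cdf
    f = alpha * (k / sigma) * (t / sigma) ** (k - 1) * np.exp(-u) * G ** (alpha - 1)
    F = G ** alpha
    return f / (1.0 - F)

# Locate the change-point on a grid for one illustrative parameter set;
# in the Bayesian analysis this step is repeated for each posterior draw.
t = np.linspace(0.01, 10.0, 2000)
h = exp_weibull_hazard(t, alpha=2.0, k=0.8, sigma=2.0)       # unimodal hazard here
t_change = t[np.argmax(h)]
print("change-point of the hazard:", round(t_change, 3))
```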
Abstract:
We revisit the issue of the constancy of the dark matter (DM) and baryonic Newtonian acceleration scales within the DM scale radius by considering a large sample of late-type galaxies. We rely on a Markov chain Monte Carlo method to estimate the parameters of the halo model and the stellar mass-to-light ratio, and then propagate the uncertainties from the rotation curve data to the estimates of the acceleration scales. This procedure allows us to compile a catalogue of 58 objects with estimated values of the B-band absolute magnitude M_B, the virial mass M_vir, and the DM and baryonic Newtonian accelerations within the scale radius r_0 (denoted g_DM(r_0) and g_bar(r_0), respectively), which we use to investigate whether it is possible to define a universal acceleration scale. We find a weak but statistically meaningful correlation with M_vir, which makes us argue against the universality of the acceleration scales. However, the results somewhat depend on the sample adopted, so a careful analysis of selection effects should be carried out before any definitive conclusion can be drawn.
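The propagation step described here, carrying posterior samples of the halo parameters through to a derived quantity such as the Newtonian acceleration at the scale radius, amounts to evaluating the derived quantity on every draw and summarizing the resulting distribution; a sketch with fake Gaussian draws standing in for the actual MCMC chain:

```python
import numpy as np

rng = np.random.default_rng(7)

G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

# Fake posterior draws standing in for the MCMC chain (illustrative values).
r0_samples = rng.normal(10.0, 1.0, size=5000)     # scale radius [kpc]
mass_samples = rng.normal(5e10, 5e9, size=5000)   # mass within r0 [Msun]

# Evaluate the derived quantity g(r0) = G M(<r0) / r0^2 on every draw.
g_r0 = G * mass_samples / r0_samples ** 2          # (km/s)^2 / kpc

lo, med, hi = np.percentile(g_r0, [16, 50, 84])
print(f"g(r0) = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f}) (km/s)^2 kpc^-1")
```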
Abstract:
Aims. We report the discovery of CoRoT-16b, a low-density hot Jupiter that orbits a faint G5V star (m_V = 15.63) in 5.3523 ± 0.0002 days on a slightly eccentric orbit. A fit of the data with no a priori assumptions on the orbit leads to an eccentricity of 0.33 ± 0.1. We discuss this value and also derive the mass and radius of the planet. Methods. We analyse the photometric transit curve of CoRoT-16 obtained by the CoRoT satellite and radial velocity data from the HARPS and HIRES spectrometers. A combined analysis using a Markov chain Monte Carlo algorithm is used to derive the system parameters. Results. CoRoT-16b is a 0.535 -0.083/+0.085 M_J, 1.17 -0.14/+0.16 R_J hot Jupiter with a density of 0.44 -0.14/+0.21 g cm^-3. Despite its short orbital distance (0.0618 ± 0.0015 AU) and the age of the parent star (6.73 ± 2.8 Gyr), the planet's orbit exhibits a significantly non-zero eccentricity. This is very uncommon for this type of object, as tidal effects tend to circularise the orbit. This value is discussed taking into account the characteristics of the star and the observation accuracy.