982 results for Monte-Carlo analysis
Abstract:
Two versions of the threshold contact process - ordinary and conservative - are studied on a square lattice. In the first, particles are created on active sites, those having at least two nearest-neighbor sites occupied, and are annihilated spontaneously. In the conservative version, a particle jumps from its site to an active site. Mean-field analysis suggests the existence of a first-order phase transition, which is confirmed by Monte Carlo simulations. In the thermodynamic limit, the two versions are found to give the same results.
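A minimal sketch of one possible Monte Carlo update for the ordinary version is given below. It assumes the common convention of creation attempts at rate lam on empty "active" sites (at least two occupied nearest neighbours) and spontaneous annihilation at rate 1; the update scheme and parameter values are illustrative, not the exact protocol of the paper.

```python
# Illustrative threshold contact process sweep on a square lattice (assumed dynamics).
import numpy as np

L, lam, steps = 64, 2.0, 200_000
rng = np.random.default_rng(0)
lattice = rng.integers(0, 2, size=(L, L))   # random initial occupation

def occupied_neighbours(s, i, j):
    return s[(i-1) % L, j] + s[(i+1) % L, j] + s[i, (j-1) % L] + s[i, (j+1) % L]

for _ in range(steps):
    i, j = rng.integers(0, L, size=2)
    if rng.random() < lam / (1.0 + lam):
        # creation attempt: fill the site only if it is "active"
        if lattice[i, j] == 0 and occupied_neighbours(lattice, i, j) >= 2:
            lattice[i, j] = 1
    else:
        lattice[i, j] = 0                    # spontaneous annihilation

print("density:", lattice.mean())
```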
Abstract:
We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done in a residual image, obtained by subtracting from the X-ray image a surface brightness model fitted with a two-dimensional analytical profile (beta-model or Sersic profile) with elliptical symmetry. Our method is applied to 34 clusters observed by the Chandra Space Telescope that are in the redshift range z ∈ [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different: they present an offset, i.e., at a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous. This is an important result for cosmological tests using the mass-luminosity relation to obtain the cluster mass function, since they rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
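The sketch below illustrates the residual-based idea: fit a smooth surface brightness model, subtract it, and quantify what is left. The circular beta-model, the synthetic image and the simple positive-residual statistic are assumptions for illustration, not the calibrated estimator or the elliptical fit used in the paper.

```python
# Illustrative residual-image substructure measure on a synthetic X-ray image.
import numpy as np
from scipy.optimize import curve_fit

def beta_model(coords, s0, x0, y0, rc, beta):
    x, y = coords
    r2 = (x - x0)**2 + (y - y0)**2
    return s0 * (1.0 + r2 / rc**2) ** (0.5 - 3.0 * beta)

n = 128
y, x = np.mgrid[0:n, 0:n]
true = beta_model((x, y), 100.0, 64, 64, 10.0, 0.7)
true[30:40, 80:90] += 30.0                               # injected "substructure"
image = np.random.default_rng(1).poisson(true).astype(float)

p0 = [image.max(), n / 2, n / 2, 8.0, 0.6]
popt, _ = curve_fit(beta_model, (x.ravel(), y.ravel()), image.ravel(), p0=p0)

residual = image - beta_model((x, y), *popt)
substructure_level = residual[residual > 0].sum() / image.sum()
print(f"substructure level ~ {substructure_level:.3f}")
```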
Abstract:
Lemonte and Cordeiro [Birnbaum-Saunders nonlinear regression models, Comput. Stat. Data Anal. 53 (2009), pp. 4441-4452] introduced a class of Birnbaum-Saunders (BS) nonlinear regression models potentially useful in lifetime data analysis. We give a general matrix Bartlett correction formula to improve the likelihood ratio (LR) tests in these models. The formula is simple enough to be used analytically to obtain several closed-form expressions in special cases. Our results generalize those in Lemonte et al. [Improved likelihood inference in Birnbaum-Saunders regressions, Comput. Stat. Data Anal. 54 (2010), pp. 1307-1316], which hold only for BS linear regression models. We use Monte Carlo simulations to show that the corrected tests perform better than the usual LR tests.
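A minimal sketch of how a Bartlett-type correction is applied once the correction factor is available: the LR statistic is rescaled before being referred to the chi-square distribution. The correction factor c is assumed to have been computed from the model (e.g. via a matrix formula such as the one referenced above, not reproduced here); the numbers are hypothetical.

```python
# Applying a Bartlett correction factor to a likelihood ratio test (illustrative).
from scipy import stats

def bartlett_corrected_lr_test(lr_stat, q, c):
    """lr_stat: observed LR statistic; q: chi-square degrees of freedom;
    c: Bartlett correction factor, E[LR; H0] / q (model-specific)."""
    lr_corrected = lr_stat / c                       # corrected statistic
    p_usual = stats.chi2.sf(lr_stat, df=q)           # usual chi-square p-value
    p_corrected = stats.chi2.sf(lr_corrected, df=q)  # p-value after correction
    return lr_corrected, p_usual, p_corrected

# hypothetical numbers for illustration only
print(bartlett_corrected_lr_test(lr_stat=6.8, q=2, c=1.12))
```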
Abstract:
Within the nutritional context, microminerals in bird feed are often supplemented in quantities exceeding those required, in an attempt to ensure adequate performance of the animals. Dose-response experiments are very common in the determination of nutrient levels for optimal feed balance and usually rely on regression models to achieve this objective. Nevertheless, routine regression analysis generally relies on a priori information about a possible relationship between the response variable and the dose. Isotonic regression is a least-squares estimation method that generates estimates which preserve the ordering of the data; in the theory of isotonic regression this ordering information is essential and is expected to increase fitting efficiency. The objective of this work was to use an isotonic regression methodology as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard line. We considered plateau-response models of quadratic-polynomial and linear-exponential form. In addition to these models, we also proposed fitting a logarithmic model to the data, and the efficiency of the methodology was evaluated by Monte Carlo simulations considering different scenarios for the parametric values. Isotonization of the data yielded an improvement in all the fitting-quality measures evaluated. Among the models used, the logarithmic model gave parameter estimates most consistent with the values reported in the literature.
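A minimal sketch of the isotonization step, i.e. order-preserving least-squares smoothing of the dose-response data before fitting a parametric model. The dose and response values below are hypothetical, not the Zn-deposition measurements from the study.

```python
# Isotonic (monotone) least-squares fit via the pool-adjacent-violators algorithm.
import numpy as np
from sklearn.isotonic import IsotonicRegression

dose = np.array([0, 20, 40, 60, 80, 100, 120])             # hypothetical Zn levels
response = np.array([210, 260, 255, 300, 310, 305, 320])   # hypothetical tibia Zn

# Returns the monotone non-decreasing sequence closest (in least squares) to the data.
iso = IsotonicRegression(increasing=True)
response_iso = iso.fit_transform(dose, response)
print(response_iso)
# The isotonized responses can then be fitted by the quadratic-plateau,
# linear-exponential, or logarithmic models mentioned in the abstract.
```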
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm, based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, as opposed to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performs as well as that in [3] in terms of parameter recovery, mainly when the Jeffreys prior is used. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is considered jointly with the development of model-fit assessment tools, and the results are compared with the ones obtained by Azevedo et al. They indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnostics and can be very useful for fitting more complex skew IRT models.
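The stochastic representation behind the proposed algorithm can be illustrated with a short sampler. Under Henze's (1986) representation, with independent U0, U1 ~ N(0,1) and delta = alpha / sqrt(1 + alpha^2), the variable Z = delta|U0| + sqrt(1 - delta^2) U1 is skew-normal with shape alpha (direct parameterization; the mapping to the centred parameterization and the full MHWGS scheme are omitted here). Parameter values are illustrative.

```python
# Sampling the skew-normal via Henze's hierarchical (half-normal + normal) representation.
import numpy as np

def sample_skew_normal(alpha, size, rng=None):
    rng = np.random.default_rng(rng)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.abs(rng.standard_normal(size))   # half-normal latent component
    u1 = rng.standard_normal(size)           # independent normal component
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1

samples = sample_skew_normal(alpha=3.0, size=100_000, rng=1)
print(samples.mean(), samples.std())  # compare with theoretical SN(0, 1, alpha) moments
```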
Abstract:
We have performed multicanonical simulations to study the critical behavior of the two-dimensional Ising model with dipole interactions. This study concerns the thermodynamic phase transitions in the range of the interaction delta where the phase characterized by striped configurations of width h = 1 is observed. Controversial results obtained from local update algorithms have been reported for this region, including the claimed existence of a second-order phase transition line that becomes first order above a tricritical point located somewhere between delta = 0.85 and 1. Our analysis relies on the complex partition function zeros obtained with high statistics from multicanonical simulations. Finite-size scaling relations for the leading partition function zeros yield critical exponents that are clearly consistent with a single second-order phase transition line, thus excluding such a tricritical point in that region of the phase diagram. This conclusion is further supported by analysis of the specific heat and the susceptibility of the orientational order parameter.
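A minimal sketch of the finite-size scaling step described above: for a second-order transition the imaginary part of the leading partition-function zero is expected to scale as Im(z1) ~ L^(-1/nu), so a log-log fit over lattice sizes L estimates 1/nu. The zero values below are hypothetical placeholders, not the simulation results.

```python
# Finite-size scaling fit of the leading partition-function zeros (illustrative data).
import numpy as np

L = np.array([16, 24, 32, 48, 64])
im_z1 = np.array([0.062, 0.040, 0.029, 0.019, 0.014])   # hypothetical Im(z1)

slope, intercept = np.polyfit(np.log(L), np.log(im_z1), 1)
nu_estimate = -1.0 / slope
print(f"1/nu ~ {-slope:.3f}, nu ~ {nu_estimate:.3f}")
```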
Abstract:
In many applications of lifetime data analysis, it is important to perform inferences about the change-point of the hazard function. The change-point can be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point of the hazard function can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance properties of maximum likelihood estimators. From the asymptotic normality of the maximum likelihood estimators, confidence intervals can also be obtained. Considering the exponentiated Weibull distribution for the lifetime data, we have different forms for the hazard function: constant, increasing, unimodal, decreasing or bathtub-shaped. This model gives great flexibility of fit, but we do not have analytic expressions for the change-point of the hazard function. We therefore consider the use of Markov chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
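A minimal sketch of how posterior summaries for the change-point can be obtained once posterior draws of the exponentiated Weibull parameters are available: for each draw, evaluate the hazard h(t) = f(t)/S(t) on a grid and locate its turning point numerically, then summarize those locations. The MCMC sampling itself is not shown, and the "posterior draws" below are hypothetical stand-ins (chosen so the hazard is unimodal).

```python
# Posterior summaries of the hazard change-point from (hypothetical) parameter draws.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t_grid = np.linspace(0.01, 10.0, 2000)

# hypothetical posterior draws of (a, c, scale) for scipy.stats.exponweib
posterior_draws = zip(rng.normal(4.0, 0.2, 200),    # shape a
                      rng.normal(0.5, 0.03, 200),   # shape c
                      rng.normal(2.0, 0.1, 200))    # scale

change_points = []
for a, c, scale in posterior_draws:
    hazard = (stats.exponweib.pdf(t_grid, a, c, scale=scale)
              / stats.exponweib.sf(t_grid, a, c, scale=scale))
    change_points.append(t_grid[np.argmax(hazard)])  # maximum for a unimodal hazard
                                                     # (use argmin for a bathtub shape)
change_points = np.array(change_points)
print(change_points.mean(), np.percentile(change_points, [2.5, 97.5]))
```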
Abstract:
We revisit the issue of the constancy of the dark matter (DM) and baryonic Newtonian acceleration scales within the DM scale radius by considering a large sample of late-type galaxies. We rely on a Markov chain Monte Carlo method to estimate the parameters of the halo model and the stellar mass-to-light ratio, and then propagate the uncertainties from the rotation curve data to the estimate of the acceleration scales. This procedure allows us to compile a catalogue of 58 objects with estimated values of the B-band absolute magnitude M_B, the virial mass M_vir, and the DM and baryonic Newtonian accelerations (denoted g_DM(r_0) and g_bar(r_0), respectively) within the scale radius r_0, which we use to investigate whether it is possible to define a universal acceleration scale. We find a weak but statistically meaningful correlation with M_vir, which leads us to argue against the universality of the acceleration scales. However, the results depend somewhat on the sample adopted, so a careful analysis of selection effects should be carried out before any definitive conclusion can be drawn.
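The correlation test at the core of the argument can be sketched as below: a rank correlation between the acceleration scale and the virial mass over the catalogue. The arrays are hypothetical stand-ins for the 58 catalogue values, not the published data, and the specific statistic (Spearman) is an assumption for illustration.

```python
# Illustrative rank-correlation test between acceleration scale and virial mass.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
log_Mvir = rng.uniform(10.5, 12.5, 58)                        # hypothetical log10 M_vir
log_gDM = -10.5 + 0.15 * log_Mvir + rng.normal(0, 0.2, 58)    # hypothetical log10 g_DM(r_0)

rho, p_value = stats.spearmanr(log_Mvir, log_gDM)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A statistically significant rho argues against a universal (mass-independent) scale.
```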
Abstract:
Aims. We report the discovery of CoRoT-16b, a low-density hot Jupiter that orbits a faint G5V star (mV = 15.63) in 5.3523 +/- 0.0002 days with slight eccentricity. A fit of the data with no a priori assumptions on the orbit leads to an eccentricity of 0.33 +/- 0.1. We discuss this value and also derive the mass and radius of the planet. Methods. We analyse the photometric transit curve of CoRoT-16 obtained by the CoRoT satellite, and radial velocity data from the HARPS and HIRES spectrometers. A combined analysis using a Markov chain Monte Carlo algorithm is used to obtain the system parameters. Results. CoRoT-16b is a 0.535 -0.083/+0.085 M_J, 1.17 -0.14/+0.16 R_J hot Jupiter with a density of 0.44 -0.14/+0.21 g cm^-3. Despite its short orbital distance (0.0618 +/- 0.0015 AU) and the age of the parent star (6.73 +/- 2.8 Gyr), the planet's orbit exhibits a significantly non-zero eccentricity. This is very uncommon for this type of object, as tidal effects tend to circularise the orbit. This value is discussed taking into account the characteristics of the star and the observation accuracy.
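A quick consistency check of the quoted bulk density from the quoted mass and radius in Jupiter units. The constants are nominal values assumed here; the small difference from the published 0.44 g cm^-3 reflects rounding and the asymmetric uncertainties.

```python
# Bulk density of CoRoT-16b from the quoted mass and radius (consistency check).
import math

M_JUP_G = 1.898e30      # Jupiter mass in grams (nominal)
R_JUP_CM = 7.1492e9     # Jupiter equatorial radius in cm (nominal)

mass_g = 0.535 * M_JUP_G
radius_cm = 1.17 * R_JUP_CM
volume_cm3 = 4.0 / 3.0 * math.pi * radius_cm**3

print(f"density ~ {mass_g / volume_cm3:.2f} g cm^-3")   # ~0.4 g cm^-3
```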
Abstract:
The hydration of mesityl oxide (MOx) was investigated through a sequential quantum mechanics/molecular mechanics approach. Emphasis was placed on the analysis of the role played by water in the MOx syn-anti equilibrium and the electronic absorption spectrum. Results for the structure of the MOx-water solution, the free energy of solvation and polarization effects are also reported. Our main conclusion is that in gas phase and in low-polarity solvents MOx exists predominantly in the syn form, while in aqueous solution it exists in the anti form. This conclusion was supported by Gibbs free energy calculations in gas phase and in water, using quantum mechanical calculations with the polarizable continuum model and thermodynamic perturbation theory in Monte Carlo simulations with a polarized MOx model. Taking the in-water polarization of MOx into account is very important to correctly describe the solute-solvent electrostatic interaction. Our best estimate for the shift of the pi-pi* transition energy of MOx on going from gas phase to water is a red-shift of -2,520 +/- 90 cm^-1, which is only 110 cm^-1 (0.014 eV) below the experimental extrapolation of -2,410 +/- 90 cm^-1. This red-shift of around -2,500 cm^-1 can be divided into two distinct and opposite contributions. One contribution is related to the syn -> anti conformational change, leading to a blue-shift of about 1,700 cm^-1. The other contribution is the solvent effect on the electronic structure of MOx, leading to a red-shift of around -4,200 cm^-1. Additionally, this red-shift caused by the solvent effect on the electronic structure can be decomposed into approximately 60% due to the bulk electrostatic effect, 10% due to the explicit inclusion of the hydrogen-bonded water molecules, and 30% due to the explicit inclusion of the nearest water molecules.
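The arithmetic of the shift decomposition quoted above can be checked directly; the numbers below are the rounded values from the abstract, used only to verify that the contributions add up as stated.

```python
# Consistency check of the quoted solvatochromic shift decomposition (values in cm^-1).
conformational_blue_shift = +1700      # syn -> anti conformational contribution
electronic_red_shift = -4200           # solvent effect on the electronic structure
print(conformational_blue_shift + electronic_red_shift)   # ~ -2500, the quoted net red-shift

# Split of the electronic term into bulk electrostatic, hydrogen-bonded and
# nearest-neighbour contributions, using the quoted fractions:
print([round(f * electronic_red_shift) for f in (0.60, 0.10, 0.30)])
```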
Abstract:
The ability to entrap drugs within vehicles and subsequently release them has led to new treatments for a number of diseases. Based on an associative phase separation and interfacial diffusion approach, we developed a way to prepare DNA gel particles without adding any kind of cross-linker or organic solvent. Among the various agents studied, cationic surfactants offered particularly efficient control over encapsulation and DNA release from these DNA gel particles. The driving force for this strong association is the electrostatic interaction between the two components, as induced by the entropic increase due to the release of the respective counter-ions. However, little is known about the influence of the respective counter-ions on this surfactant-DNA interaction. Here we examined the effect of different counter-ions on the formation and properties of DNA gel particles by mixing DNA (either single-stranded (ssDNA) or double-stranded (dsDNA)) with the single-chain surfactant dodecyltrimethylammonium (DTA). In particular, we used as counter-ions of this surfactant the hydrogen sulfate and trifluoromethane sulfonate anions and the two halides, chloride and bromide. Effects on the morphology of the particles obtained, the encapsulation of DNA and its release, as well as the haemocompatibility of these particles are presented, using counter-ion structure and DNA conformation as controlling parameters. Analysis of the data indicates that the degree of counter-ion dissociation from the surfactant micelles and the polar/hydrophobic character of the counter-ion are important parameters for the final properties of the particles. The stronger interaction with amphiphiles for ssDNA than for dsDNA suggests the important role of hydrophobic interactions in DNA.
Abstract:
The study of proportions is a common topic in many fields. The standard beta distribution or the inflated beta distribution may be a reasonable choice to fit a proportion in most situations. However, they do not fit well variables that take no values in the open interval (0, c), 0 < c < 1. For these variables, the authors introduce the truncated inflated beta distribution (TBEINF). The proposed distribution is a mixture of a beta distribution bounded on the open interval (c, 1) and a trinomial distribution. The authors present the moments of the distribution, its score vector, and its Fisher information matrix, and discuss estimation of its parameters. The properties of the suggested estimators are studied using Monte Carlo simulation. In addition, the authors present an application of the TBEINF distribution to unemployment insurance data.
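The structure of such a mixture can be illustrated with a small simulator: point masses at 0, c and 1 (the trinomial part) plus a beta component rescaled to the interval (c, 1). The parameterization, names and values below are assumptions chosen only to illustrate the idea, not the exact TBEINF parameterization of the paper.

```python
# Illustrative simulation from a truncated-inflated-beta-type mixture.
import numpy as np

def sample_tbeinf(n, c, p0, pc, p1, a, b, rng=None):
    """p0, pc, p1: point-mass probabilities at 0, c, 1 (sum < 1);
    the remaining mass goes to a Beta(a, b) rescaled from (0, 1) to (c, 1)."""
    rng = np.random.default_rng(rng)
    p_cont = 1.0 - (p0 + pc + p1)
    component = rng.choice(4, size=n, p=[p0, pc, p1, p_cont])
    x = np.empty(n)
    x[component == 0] = 0.0
    x[component == 1] = c
    x[component == 2] = 1.0
    n_cont = np.sum(component == 3)
    x[component == 3] = c + (1.0 - c) * rng.beta(a, b, size=n_cont)
    return x

sample = sample_tbeinf(10_000, c=0.25, p0=0.10, pc=0.05, p1=0.15, a=2.0, b=3.0, rng=0)
print(sample.min(), sample.max(), np.mean(sample == 0.0))
```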
Abstract:
Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method, we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis for simultaneously determining the CP and selecting the most likely group members with a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
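A minimal sketch of the geometric core of a CP search: each star's position and proper-motion direction define a great circle whose pole is the cross product of the two, and the CP is the direction most nearly common to all circles, i.e. the eigenvector with the smallest eigenvalue of the sum of the pole outer products. Membership selection, internal-motion corrections and uncertainty estimation are omitted, and the input values are hypothetical.

```python
# Illustrative convergent-point estimate from positions and proper motions.
import numpy as np

def convergent_point(ra, dec, pm_ra_cosdec, pm_dec):
    """ra, dec in radians; proper motions in consistent (arbitrary) units."""
    r = np.column_stack([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)])
    e_ra = np.column_stack([-np.sin(ra), np.cos(ra), np.zeros_like(ra)])
    e_dec = np.column_stack([-np.sin(dec) * np.cos(ra), -np.sin(dec) * np.sin(ra), np.cos(dec)])
    t = pm_ra_cosdec[:, None] * e_ra + pm_dec[:, None] * e_dec   # tangent-plane motion
    poles = np.cross(r, t)                                       # great-circle poles
    poles /= np.linalg.norm(poles, axis=1, keepdims=True)
    # CP direction: eigenvector of sum(p p^T) with the smallest eigenvalue
    w, v = np.linalg.eigh(poles.T @ poles)
    cp = v[:, 0]                                # note: defined up to the antipodal point
    return np.arctan2(cp[1], cp[0]) % (2 * np.pi), np.arcsin(cp[2])

# hypothetical stars roughly sharing a space motion (values for illustration only)
ra = np.radians([65.0, 67.0, 70.0, 63.0])
dec = np.radians([16.0, 15.0, 17.5, 14.0])
pm_ra_cosdec = np.array([110.0, 105.0, 98.0, 115.0])
pm_dec = np.array([-25.0, -28.0, -22.0, -30.0])
print(np.degrees(convergent_point(ra, dec, pm_ra_cosdec, pm_dec)))
```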
Abstract:
Liquid configurations generated by Metropolis Monte Carlo simulations are used in time-dependent density functional theory calculations of the spectral line shifts and line profiles of the lowest-lying excitation of the alkaline earth atoms Be, Mg, Ca, Sr and Ba embedded in liquid helium. The results are in very good agreement with the available experimental data. Special attention is given to the calculated spectroscopic shift and the associated line broadening. The analysis specifies the inhomogeneous broadening of the three separate contributions due to the splitting of the s -> p transition of the alkaline earth atom in the liquid environment.
Abstract:
In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic model of radiation carcinogenesis: latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process of tumour cells after an initial treatment, or the capacity of an individual exposed to irradiation to repair altered cells that result in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells, those not eliminated by the treatment or repaired by the repair system of an individual. Markov chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Also, some discussion of model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)] are presented.
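The "destructive" mechanism can be illustrated with a short simulation: an initial number of altered cells is generated (here from a plain Poisson, standing in for the compound weighted Poisson of the paper), and each cell independently survives treatment or repair with some probability, so only the damaged portion is recorded. Parameter names and values are hypothetical.

```python
# Illustrative destructive (binomially thinned) counting mechanism behind cure rate models.
import numpy as np

rng = np.random.default_rng(0)
theta, p = 3.0, 0.4                       # hypothetical mean count and survival probability

N = rng.poisson(theta, size=100_000)      # initial number of altered cells
D = rng.binomial(N, p)                    # cells not eliminated/repaired (the recorded damage)

print(D.mean(), np.mean(D == 0))          # mean damage and the implied "cure" fraction P(D = 0)
```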