80 results for Monte Carlo study
Abstract:
We study the (K-, p) reaction on nuclei with a 1 GeV/c momentum kaon beam, paying special attention to the region of emitted protons having kinetic energy above 600 MeV, which was used to claim a deeply attractive kaon-nucleus optical potential. Our model describes the nuclear reaction in the framework of a local density approach, and the calculations are performed following two different procedures: one is based on a many-body method using the Lindhard function and the other on a Monte Carlo simulation. The simulation method offers flexibility to account for processes other than kaon quasielastic scattering, such as K- absorption by one and two nucleons with hyperon production, and allows consideration of final-state interactions of the K-, the p, and all other primary and secondary particles on their way out of the nucleus, as well as the weak decay of the produced hyperons into pi N. We find a limited sensitivity of the cross section to the strength of the kaon optical potential. We also identify a serious drawback in the experimental setup, namely the requirement that, together with the energetic proton, at least one charged particle be detected in the decay counter surrounding the target: we find that this requirement appreciably distorts the shape of the original cross section, to the point of invalidating the claims made in the experimental paper on the strength of the kaon-nucleus optical potential.
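A minimal sketch of the Monte Carlo ingredient described above, with a Woods-Saxon density and illustrative (not the authors') cross sections and parameters: a K- is marched through the nucleus at a sampled impact parameter and either scatters quasielastically, is absorbed, or escapes.

```python
# Illustrative sketch only: all parameters and cross sections are assumed.
import numpy as np

rng = np.random.default_rng(0)

R, a, rho0 = 4.1, 0.55, 0.17        # radius (fm), diffuseness (fm), density (fm^-3)
sigma_qe, sigma_abs = 2.0, 1.0      # hypothetical K-N cross sections (fm^2)

def rho(r):
    """Woods-Saxon nuclear density profile."""
    return rho0 / (1.0 + np.exp((r - R) / a))

def propagate(b, step=0.05, zmax=10.0):
    """Return the first reaction: 'quasielastic', 'absorption' or 'escape'."""
    z = -zmax
    while z < zmax:
        r = np.hypot(b, z)
        # interaction probability over this step, set by the local density
        if rng.random() < rho(r) * (sigma_qe + sigma_abs) * step:
            qe = rng.random() < sigma_qe / (sigma_qe + sigma_abs)
            return "quasielastic" if qe else "absorption"
        z += step
    return "escape"

counts = {}
for _ in range(20000):
    b = 8.0 * np.sqrt(rng.random())  # impact parameter, uniform in area
    kind = propagate(b)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)
```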
Abstract:
Using Monte Carlo simulations we study the dynamics of three-dimensional Ising models with nearest-neighbor, next-nearest-neighbor, and four-spin (plaquette) interactions. During coarsening, such models develop growing energy barriers, which leads to very slow dynamics at low temperature. As already reported, the model with only the plaquette interaction exhibits some of the features characteristic of ordinary glasses: strong metastability of the supercooled liquid, a weak increase of the characteristic length under cooling, stretched-exponential relaxation, and aging. The addition of two-spin interactions, in general, destroys this behavior: the liquid phase loses metastability and the slow-dynamics regime terminates well below the melting transition, which is presumably related to a certain corner-rounding transition. However, for a particular choice of interaction constants, when the ground state is strongly degenerate, our simulations suggest that the slow-dynamics regime extends up to the melting transition. The analysis of these models leads us to the conjecture that in the four-spin Ising model domain walls lose their tension at the glassy transition and that they are basically tensionless in the glassy phase.
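For concreteness, a minimal Metropolis sketch of such a model (assumed couplings and lattice size; the plaquette-only case J1 = J2 = 0 discussed above). Every energy term containing the flipped spin changes sign, so the energy change is minus twice the local energy.

```python
# Assumed couplings and lattice size; illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(1)
L = 8
s = rng.choice([-1, 1], size=(L, L, L))
J1, J2, J4, T = 0.0, 0.0, 1.0, 2.0

def local_energy(s, x, y, z):
    """Sum of all energy terms that contain spin (x, y, z)."""
    e, c = 0.0, s[x, y, z]
    # nearest neighbors
    for d in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
        e -= J1 * c * (s[(x+d[0]) % L, (y+d[1]) % L, (z+d[2]) % L] +
                       s[(x-d[0]) % L, (y-d[1]) % L, (z-d[2]) % L])
    for i in range(3):
        for j in range(i + 1, 3):
            # next-nearest neighbors: face diagonals in the (i, j) plane
            for si in (-1, 1):
                for sj in (-1, 1):
                    d = [0, 0, 0]; d[i], d[j] = si, sj
                    e -= J2 * c * s[(x+d[0]) % L, (y+d[1]) % L, (z+d[2]) % L]
            # the four plaquettes in the (i, j) plane containing this site
            a = [0, 0, 0]; a[i] = 1
            b = [0, 0, 0]; b[j] = 1
            for oa in (0, -1):
                for ob in (0, -1):
                    prod = 1
                    for ca, cb in ((0, 0), (1, 0), (0, 1), (1, 1)):
                        prod *= s[(x + (oa+ca)*a[0] + (ob+cb)*b[0]) % L,
                                  (y + (oa+ca)*a[1] + (ob+cb)*b[1]) % L,
                                  (z + (oa+ca)*a[2] + (ob+cb)*b[2]) % L]
                    e -= J4 * prod
    return e

for sweep in range(50):
    for _ in range(L ** 3):
        x, y, z = rng.integers(0, L, 3)
        dE = -2.0 * local_energy(s, x, y, z)  # all local terms flip sign
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[x, y, z] *= -1
print("magnetization:", s.mean())
```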
Abstract:
We propose a short-range generalization of the p-spin interaction spin-glass model. The model is well suited to test the idea that an entropy collapse underlies the dynamical singularity encountered in structural glasses. The model is studied in three dimensions through Monte Carlo simulations, which reveal fragile-glass behavior with stretched-exponential relaxation and super-Arrhenius growth of the relaxation time. Our data favor a Vogel-Fulcher behavior of the relaxation time, related to an entropy collapse at the Kauzmann temperature. We encounter, however, difficulties analogous to those found in experimental systems when extrapolating thermodynamical data to low temperatures. We study the spin-glass susceptibility, investigating the behavior of the correlation length in the system. We find that the increase of the relaxation time is accompanied by a very slow growth of the correlation length. We discuss the scaling properties of off-equilibrium dynamics in the glassy regime, finding qualitative agreement with mean-field theory.
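A hedged sketch, on synthetic relaxation times rather than the paper's data, of how an Arrhenius law can be compared with a Vogel-Fulcher fit, tau = tau0 exp[A/(T - T0)]:

```python
# Synthetic data standing in for measured relaxation times; not the paper's.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
T = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.4, 1.6])
tau = 0.5 * np.exp(1.2 / (T - 0.5)) * np.exp(0.05 * rng.normal(size=T.size))

def log_vf(T, ltau0, A, T0):        # Vogel-Fulcher: tau = tau0 exp[A/(T - T0)]
    return ltau0 + A / (T - T0)

def log_arr(T, ltau0, A):           # Arrhenius: tau = tau0 exp[A/T]
    return ltau0 + A / T

p_vf, _ = curve_fit(log_vf, T, np.log(tau), p0=(0.0, 1.0, 0.4),
                    bounds=([-10, 0, 0], [10, 10, 0.79]))
p_ar, _ = curve_fit(log_arr, T, np.log(tau), p0=(0.0, 1.0))
print("Vogel-Fulcher T0 estimate:", p_vf[2])
print("sum of squared residuals, VF vs Arrhenius:",
      ((np.log(tau) - log_vf(T, *p_vf)) ** 2).sum(),
      ((np.log(tau) - log_arr(T, *p_ar)) ** 2).sum())
```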
Abstract:
We report the results of Monte Carlo simulations aimed at clarifying the microscopic origin of exchange bias in the magnetization hysteresis loops of a model of individual core/shell nanoparticles. Increasing the exchange coupling across the core/shell interface leads to an enhancement of exchange bias and to an increasing asymmetry between the two branches of the loops, which is due to different reversal mechanisms. A detailed study of the magnetic order of the interfacial spins shows compelling evidence that the existence of a net magnetization due to uncompensated spins at the shell interface is responsible for both phenomena, and allows us to quantify the loop shifts directly in terms of microscopic parameters, in striking agreement with the macroscopically observed values.
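A minimal sketch (synthetic loop data, with an assumed tanh shape) of how such loop shifts are quantified: the exchange-bias field is the midpoint of the fields where the two branches cross M = 0, and the coercivity is their half-difference.

```python
# Synthetic, pre-shifted loop branches; purely for illustration.
import numpy as np

def zero_crossing(H, M):
    """Field where M changes sign, by linear interpolation."""
    i = np.where(np.sign(M[:-1]) != np.sign(M[1:]))[0][0]
    return H[i] - M[i] * (H[i + 1] - H[i]) / (M[i + 1] - M[i])

H_dn = np.linspace(4, -4, 200)                   # descending branch
H_up = np.linspace(-4, 4, 200)                   # ascending branch
M_dn = np.tanh(2.0 * (H_dn + 1.3))
M_up = np.tanh(2.0 * (H_up + 0.3))

Hc_minus = zero_crossing(H_dn, M_dn)
Hc_plus = zero_crossing(H_up, M_up)
H_eb = 0.5 * (Hc_plus + Hc_minus)                # loop shift (exchange-bias field)
H_c = 0.5 * (Hc_plus - Hc_minus)                 # coercive field
print("H_eb =", H_eb, " H_c =", H_c)
```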
Abstract:
A new arena for the dynamics of spacetime is proposed, in which the basic quantum variable is the two-point distance on a metric space. The scaling dimension (that is, the Kolmogorov capacity) in the neighborhood of each point then defines in a natural way a local concept of dimension. We study our model in the region of parameter space in which the resulting spacetime is not too different from a smooth manifold.
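As a concrete illustration of the scaling-dimension idea, a box-counting estimate of the Kolmogorov capacity of a point set (the uniform 2D sample below is a stand-in, not the model's spacetime):

```python
# Box-counting sketch; a uniform 2D sample stands in for any point set.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.random((20000, 2))                     # expect a dimension close to 2

def box_count(pts, eps):
    """Number of boxes of side eps containing at least one point."""
    return len(np.unique(np.floor(pts / eps).astype(int), axis=0))

eps = np.array([0.2, 0.1, 0.05, 0.025])
N = np.array([box_count(pts, e) for e in eps])
d = np.polyfit(np.log(1.0 / eps), np.log(N), 1)[0]   # slope of log N vs log(1/eps)
print("estimated dimension:", d)
```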
Abstract:
We study a model for water with a tunable intramolecular interaction Js, using mean-field theory and off-lattice Monte Carlo simulations. For all Js >= 0, the model displays a temperature of maximum density. For a finite intramolecular interaction Js > 0, our calculations support the presence of a liquid-liquid phase transition with a possible liquid-liquid critical point for water, likely preempted by inevitable freezing. For Js = 0, the liquid-liquid critical point disappears at T = 0.
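A small sketch of how a temperature of maximum density can be located from simulation output, using made-up density-versus-temperature points and a quadratic fit:

```python
# Made-up density data; the parabola vertex locates the TMD.
import numpy as np

T = np.array([230.0, 240.0, 250.0, 260.0, 270.0, 280.0, 290.0])   # K (assumed)
rho = np.array([0.9945, 0.9970, 0.9988, 0.9997, 0.9996, 0.9985, 0.9965])

a, b, c = np.polyfit(T, rho, 2)                  # quadratic fit rho(T)
T_md = -b / (2.0 * a)                            # vertex: maximum density
print("TMD estimate (K):", T_md)
```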
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers may control the group size and the number of interactions within dyads.
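A hedged sketch of the Monte Carlo logic: with the sociomatrix X split into symmetric and skew-symmetric parts, X = S + K, the statistic is Phi = tr(K'K)/tr(X'X). The null model below, which keeps each dyad's total interactions fixed and randomizes their direction, is one plausible choice assumed for illustration; the paper's exact sampling scheme is not given in the abstract.

```python
# Illustrative null model: dyad totals fixed, direction split binomially.
import numpy as np

rng = np.random.default_rng(4)

def phi(X):
    """Skew-symmetry statistic Phi = tr(K'K) / tr(X'X), with K = (X - X') / 2."""
    K = 0.5 * (X - X.T)
    return np.trace(K.T @ K) / np.trace(X.T @ X)

def null_phi(dyad_totals, n, reps=10000):
    """Sampling distribution of Phi when only the direction within dyads is random."""
    out = np.empty(reps)
    iu = np.triu_indices(n, k=1)
    for r in range(reps):
        X = np.zeros((n, n))
        for (i, j), tot in zip(zip(*iu), dyad_totals):
            x = rng.binomial(tot, 0.5)          # interactions from i to j
            X[i, j], X[j, i] = x, tot - x
        out[r] = phi(X)
    return out

n = 5
X_obs = rng.integers(0, 10, (n, n))             # stand-in observed sociomatrix
np.fill_diagonal(X_obs, 0)
totals = (X_obs + X_obs.T)[np.triu_indices(n, k=1)]
dist = null_phi(totals, n)
print("Phi =", phi(X_obs), " p =", np.mean(dist >= phi(X_obs)))
```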
Abstract:
The present study discusses retention criteria for principal components analysis (PCA) applied to Likert-scale items typical of psychological questionnaires. The main aim is to recommend that applied researchers refrain from relying solely on the eigenvalue-greater-than-one criterion; alternative procedures are suggested for adjusting for sampling error. An additional objective is to add evidence on the consequences of applying this rule when PCA is used with discrete variables. The experimental conditions were studied by means of Monte Carlo sampling, including several sample sizes, different numbers of variables and answer alternatives, and four non-normal distributions. The results suggest that even when all the items, and thus the underlying dimensions, are independent, eigenvalues greater than one are frequent and can explain up to 80% of the variance in the data, meeting the empirical criterion. The consequences of using Kaiser's rule are illustrated with a clinical psychology example. The size of the eigenvalues turned out to be a function of the sample size and the number of variables, which is also the case for parallel analysis, as previous research shows. To enhance the application of alternative criteria, an R package was developed for deciding the number of principal components to retain by means of confidence intervals constructed about the eigenvalues corresponding to lack of relationship between discrete variables.
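The authors' implementation is an R package; the Python snippet below only sketches the underlying comparison logic of a parallel-analysis-style criterion, with assumed sizes and uniform random responses: eigenvalues of observed discrete data are compared with a percentile of eigenvalues obtained from independent random data of the same size and response scale.

```python
# Sketch of a parallel-analysis-style retention criterion for discrete items.
import numpy as np

rng = np.random.default_rng(5)
n, k, levels, reps = 200, 10, 5, 1000            # cases, items, response options

def eigvals(data):
    """Descending eigenvalues of the item correlation matrix."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

obs = rng.integers(1, levels + 1, (n, k))        # stand-in for questionnaire data
ref = np.array([eigvals(rng.integers(1, levels + 1, (n, k)))
                for _ in range(reps)])
threshold = np.percentile(ref, 95, axis=0)       # reference eigenvalues

retain = int(np.sum(eigvals(obs) > threshold))
print("components to retain:", retain)
print("largest random-data eigenvalue:", threshold[0])  # > 1 even for independent items
```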
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of the estimation of autocorrelation is examined. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates highlight the low probability of detecting autocorrelation in series with fewer than 20 measurement times.
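A hedged sketch of the Monte Carlo comparison, using the conventional lag-one estimator and a simple first-order bias-adjusted variant as stand-ins for the ten estimators studied:

```python
# Two example estimators only; not the paper's full set.
import numpy as np

rng = np.random.default_rng(6)

def ar1(n, phi):
    """Generate an AR(1) series with parameter phi and unit innovations."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def r1(x):
    """Conventional lag-one autocorrelation estimator."""
    d = x - x.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d * d)

def r1_adj(x):
    """First-order bias-adjusted variant (Marriott-Pope-type correction)."""
    r = r1(x)
    return r + (1.0 + 3.0 * r) / len(x)

phi, n, reps = 0.3, 20, 10000
est = np.array([[r1(x), r1_adj(x)]
                for x in (ar1(n, phi) for _ in range(reps))])
mse = np.mean((est - phi) ** 2, axis=0)          # combines bias and variance
print("MSE conventional vs adjusted:", mse)
```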
Abstract:
If single-case experimental designs are to be used to establish guidelines for evidence-based interventions in clinical and educational settings, numerical values that reflect treatment effect sizes are required. The present study compares four recently developed procedures for quantifying the magnitude of intervention effect using data with known characteristics. Monte Carlo methods were used to generate AB design data with potential confounding variables (serial dependence, linear and curvilinear trend, and heteroscedasticity between phases) and two types of treatment effect (level and slope change). The results suggest that data features are important for choosing the appropriate procedure and, thus, inspecting the graphed data visually is a necessary initial stage. In the presence of serial dependence or a change in data variability, the Nonoverlap of All Pairs (NAP) and the Slope and Level Change (SLC) were the only techniques of the four examined that performed adequately. Introducing a data correction step in NAP renders it unaffected by linear trend, as is also the case for the Percentage of Nonoverlapping Corrected Data and SLC. The performance of these techniques indicates that professionals' judgments concerning treatment effectiveness can be readily complemented by both visual and statistical analyses. A flowchart to guide selection of techniques according to the data characteristics identified by visual inspection is provided.
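Of the four procedures, NAP has the simplest definition; a minimal sketch with illustrative data follows (for trended data, a preliminary correction step, as discussed above, would precede this):

```python
# NAP with made-up AB-design data.
import numpy as np

def nap(A, B):
    """Proportion of (A, B) pairs where the B point exceeds the A point; ties count half."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    diff = B[None, :] - A[:, None]               # all nA x nB pairwise differences
    wins = np.sum(diff > 0) + 0.5 * np.sum(diff == 0)
    return wins / diff.size

A = [3, 4, 4, 5, 4]                              # baseline phase (illustrative)
B = [6, 7, 5, 8, 7, 9]                           # treatment phase
print("NAP =", nap(A, B))
```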
Abstract:
We report on the study of nonequilibrium ordering in the reaction-diffusion lattice gas. It is a kinetic model that relaxes towards steady states under the simultaneous competition of a thermally activated creation-annihilation (reaction) process at temperature T and a diffusion process driven by a heat bath at temperature T′ ≠ T. The phase diagram as one varies T and T′, the system dimension d, the relative a priori probabilities for the two processes, and their dynamical rates is investigated. We compare mean-field theory, new Monte Carlo data, and known exact results for some limiting cases. In particular, no evidence of Landau critical behavior is found numerically for d = 2 with Metropolis rates, but rather Onsager critical points and a variety of first-order phase transitions.
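A minimal two-dimensional sketch of the competing dynamics (assumed parameters; field and chemical-potential terms are omitted for brevity): with probability p a creation-annihilation move is judged at temperature T, otherwise a nearest-neighbor hop is judged at T′.

```python
# Illustrative competing-dynamics kernel; not the authors' code.
import numpy as np

rng = np.random.default_rng(7)
L, J, T, Tp, p = 16, 1.0, 2.0, 4.0, 0.5
n = rng.integers(0, 2, (L, L))                  # occupation numbers

def dE_flip(n, x, y):
    """Energy change for toggling site (x, y), with E = -J * sum_<ij> n_i n_j."""
    nb = n[(x+1) % L, y] + n[(x-1) % L, y] + n[x, (y+1) % L] + n[x, (y-1) % L]
    return (1 - 2 * n[x, y]) * (-J * nb)

for step in range(200000):
    x, y = rng.integers(0, L, 2)
    if rng.random() < p:                        # reaction move, judged at T
        dE = dE_flip(n, x, y)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            n[x, y] ^= 1
    else:                                       # diffusion move, judged at T'
        dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        u, v = (x + dx) % L, (y + dy) % L
        if n[x, y] != n[u, v]:
            dE = dE_flip(n, x, y) + dE_flip(n, u, v) + J   # fix double-counted bond
            if dE <= 0 or rng.random() < np.exp(-dE / Tp):
                n[x, y], n[u, v] = n[u, v], n[x, y]
print("final density:", n.mean())
```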
Abstract:
Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. Patient-based simulation consisted of a left ventricle including a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and of left-ventricle contraction in myocardial perfusion SPECT and gated-SPECT studies.
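A small sketch of the surface-representation step, assuming the surface is parameterized by radii r(theta, z) on a grid of control points (the control values below are made up):

```python
# Bicubic spline through a grid of control-point radii; illustrative only.
import numpy as np
from scipy.interpolate import RectBivariateSpline

theta = np.linspace(0, 2 * np.pi, 9)            # angular control positions
z = np.linspace(0.0, 8.0, 6)                    # long-axis control positions (cm)
rng = np.random.default_rng(8)
r_ctrl = 2.5 + 0.3 * rng.random((theta.size, z.size))   # made-up control radii

spline = RectBivariateSpline(theta, z, r_ctrl, kx=3, ky=3)  # bicubic fit
tt = np.linspace(0, 2 * np.pi, 100)
zz = np.linspace(0.0, 8.0, 50)
r = spline(tt, zz)                              # smooth surface radii

# Cartesian coordinates of the surface, e.g. for mapping an activity template
X = r * np.cos(tt)[:, None]
Y = r * np.sin(tt)[:, None]
print(X.shape, Y.shape)
```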
Abstract:
Monte Carlo (MC) simulations have been used to study the structure of an intermediate thermal phase of poly(α-octadecyl γ,D-glutamate). This is a comblike poly(γ-peptide) able to adopt a biphasic structure that has been described as a layered arrangement of backbone helical rods immersed in a paraffinic pool of polymethylene side chains. Simulations were performed at two different temperatures (348 and 363 K), both of them above the melting point of the paraffinic phase, using the configurational bias MC algorithm. Results indicate that layers are constituted by a side-by-side packing of 17/5 helices. The organization of the interlayer paraffinic region is described in atomistic terms by examining the torsional angles and the end-to-end distances of the octadecyl side chains. Comparison with previously reported comblike poly(β-peptide)s revealed significant differences in the organization of the alkyl side chains.
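A hedged sketch of the configurational-bias idea on a toy two-dimensional lattice chain, with hard-core interactions standing in for the octadecyl side chains: each bead is placed by choosing among trial directions with probability proportional to their Boltzmann weights, and the accumulated Rosenbluth weight W enters the acceptance rule (not shown).

```python
# Toy 2D lattice chain with hard-core beads; illustrative only.
import numpy as np

rng = np.random.default_rng(9)
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def regrow(n_beads):
    """Grow a chain bead by bead; return (chain, Rosenbluth weight) or None."""
    chain = [(0, 0)]
    occupied = {(0, 0)}
    W = 1.0
    for _ in range(n_beads - 1):
        x, y = chain[-1]
        trials = [(x + dx, y + dy) for dx, dy in moves]
        w = np.array([0.0 if t in occupied else 1.0 for t in trials])  # hard core
        if w.sum() == 0.0:
            return None                       # dead end: regrowth rejected
        W *= w.sum() / len(moves)             # running Rosenbluth weight
        nxt = trials[rng.choice(len(trials), p=w / w.sum())]
        chain.append(nxt)
        occupied.add(nxt)
    return chain, W

grown = [g for g in (regrow(18) for _ in range(2000)) if g is not None]
weights = np.array([W for _, W in grown])
print("chains grown:", len(grown), " mean Rosenbluth weight:", weights.mean())
```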
Abstract:
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division, we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor in the robustness and the power of the randomization test.
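A minimal sketch of the randomization-test logic for an ABAB design, using exhaustive enumeration of admissible triplets rather than the paper's 100,000-sample conditional simulation; a minimum phase length of 3 is assumed:

```python
# Exhaustive version for short series; illustrative only.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(10)

def stat(y, t1, t2, t3):
    """Test statistic: mean of B phases minus mean of A phases."""
    A = np.concatenate([y[:t1], y[t2:t3]])
    B = np.concatenate([y[t1:t2], y[t3:]])
    return B.mean() - A.mean()

def randomization_test(y, used, min_len=3):
    """p-value of the observed division 'used' among all admissible triplets."""
    n = len(y)
    triplets = [t for t in combinations(range(1, n), 3)
                if t[0] >= min_len and t[1] - t[0] >= min_len
                and t[2] - t[1] >= min_len and n - t[2] >= min_len]
    dist = np.array([stat(y, *t) for t in triplets])
    return np.mean(dist >= stat(y, *used))

y = np.concatenate([rng.normal(0, 1, 6), rng.normal(1, 1, 6),
                    rng.normal(0, 1, 6), rng.normal(1, 1, 6)])
print("p =", randomization_test(y, (6, 12, 18)))
```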
Abstract:
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions. Following one method or the other affects the results obtained after the randomization test has been applied; therefore, the goal of the study was to examine these effects in more detail. The discrepancies between the approaches are obvious when data with zero treatment effect are considered, and they have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
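One way to contrast the two approaches, as a hedged sketch: build a null distribution specific to one fixed data division, and a common distribution pooled over all admissible divisions, from simulated zero-effect series, then compare their critical values.

```python
# Zero-effect series; one fixed division vs divisions drawn at random from
# all admissible triplets (minimum phase length 3 assumed).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(11)

def stat(y, t1, t2, t3):
    """Mean of B phases minus mean of A phases in an ABAB division."""
    A = np.concatenate([y[:t1], y[t2:t3]])
    B = np.concatenate([y[t1:t2], y[t3:]])
    return B.mean() - A.mean()

n, min_len, reps = 24, 3, 2000
triplets = [t for t in combinations(range(1, n), 3)
            if t[0] >= min_len and t[1] - t[0] >= min_len
            and t[2] - t[1] >= min_len and n - t[2] >= min_len]

specific, common = [], []
for _ in range(reps):
    y = rng.normal(size=n)                       # zero treatment effect
    specific.append(stat(y, 6, 12, 18))          # data-division-specific
    t1, t2, t3 = triplets[rng.integers(len(triplets))]
    common.append(stat(y, t1, t2, t3))           # common (pooled) distribution

print("95th percentiles, specific vs common:",
      np.percentile(specific, 95), np.percentile(common, 95))
```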