992 results for Monte-carlo Calculations
Abstract:
We consider damage spreading transitions in the framework of mode-coupling theory. This theory describes relaxation processes in glasses in the mean-field approximation, which are known to be characterized by the presence of an exponentially large number of metastable states. For systems evolving under identical but arbitrarily correlated noises, we demonstrate that there exists a critical temperature T0 which separates two different dynamical regimes, depending on whether damage spreads or not in the asymptotic long-time limit. This transition exists for generic noise correlations such that the zero-damage solution is stable at high temperatures, the transition temperature being minimal for maximally correlated noises. Although this dynamical transition depends on the type of noise correlations, we show that the asymptotic damage has the properties expected of a dynamical order parameter, namely (i) independence of the initial damage; (ii) independence of the class of initial conditions; and (iii) stability of the transition in the presence of asymmetric interactions which violate detailed balance. For maximally correlated noises we suggest that damage spreading occurs due to the presence of a divergent number of saddle points (as well as metastable states) in the thermodynamic limit, a consequence of the ruggedness of the free-energy landscape which characterizes the glassy state. These results are then compared to extensive numerical simulations of a mean-field glass model (the Bernasconi model) with Monte Carlo heat-bath dynamics. The freedom to choose arbitrary noise correlations for Langevin dynamics makes damage spreading an interesting tool to probe the ruggedness of the configurational landscape.
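A minimal sketch of the damage-spreading measurement described above: two replicas are evolved with heat-bath Monte Carlo driven by identical random numbers (maximally correlated noise), and the Hamming distance between them is the damage. A 1D Ising chain is used here as a stand-in for the Bernasconi model of the paper; the chain, its couplings, and the temperatures printed are illustrative assumptions only.

```python
# Toy damage-spreading experiment: two replicas, identical noise,
# heat-bath dynamics on a 1D Ising chain (illustrative stand-in).
import numpy as np

def heat_bath_sweep(spins, J, T, uniforms):
    """One heat-bath sweep driven by a fixed array of uniform random numbers."""
    N = len(spins)
    for i in range(N):
        h = J * (spins[(i - 1) % N] + spins[(i + 1) % N])  # local field, periodic chain
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))          # P(spin_i = +1)
        spins[i] = 1 if uniforms[i] < p_up else -1

def damage(T, N=256, sweeps=500, initial_damage=0.1, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.choice([-1, 1], size=N)
    b = a.copy()
    b[rng.random(N) < initial_damage] *= -1      # damage a fraction of the spins
    for _ in range(sweeps):
        u = rng.random(N)                        # identical noise for both replicas
        heat_bath_sweep(a, 1.0, T, u)
        heat_bath_sweep(b, 1.0, T, u)
    return np.mean(a != b)                       # Hamming distance = damage

print(damage(T=0.5), damage(T=3.0))              # low T vs high T
```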
Abstract:
Magnetic properties of Fe nanodots are simulated using a scaling technique and the Monte Carlo method, in good agreement with experimental results. For 20-nm-thick dots with diameters larger than 60 nm, magnetization reversal via a vortex state is observed. The role of the magnetic interaction between dots in arrays during the reversal process is studied as a function of the nanometric center-to-center distance. When this distance is more than twice the dot diameter, the interaction can be neglected and the magnetic properties of the entire array are determined by the magnetic configuration of the individual dots. The effect of crystalline anisotropy on the vortex state is also investigated. For arrays of noninteracting dots, the anisotropy strongly affects the vortex nucleation field and the coercivity, and only slightly affects the vortex annihilation field.
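For illustration, a minimal Metropolis Monte Carlo sketch for a classical spin lattice with exchange, uniaxial anisotropy, and Zeeman terms, the basic machinery behind such simulations. The small cubic lattice and the reduced parameters J, K, H, T are invented placeholders; the scaling technique, the dot geometry, and the Fe parameters of the paper are not reproduced.

```python
# Metropolis sweep for classical Heisenberg spins (toy parameters).
import numpy as np

rng = np.random.default_rng(1)
L, J, K, H, T = 8, 1.0, 0.1, 0.0, 0.5          # lattice size and reduced units (illustrative)
S = rng.normal(size=(L, L, L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)  # unit spin vectors

def site_energy(S, i, j, k, s):
    nn = (S[(i+1) % L, j, k] + S[(i-1) % L, j, k] + S[i, (j+1) % L, k] +
          S[i, (j-1) % L, k] + S[i, j, (k+1) % L] + S[i, j, (k-1) % L])
    return -J * np.dot(s, nn) - K * s[2]**2 - H * s[2]   # exchange + anisotropy + field

def sweep(S, T):
    for _ in range(L**3):
        i, j, k = rng.integers(0, L, size=3)
        s_new = rng.normal(size=3)
        s_new /= np.linalg.norm(s_new)
        dE = site_energy(S, i, j, k, s_new) - site_energy(S, i, j, k, S[i, j, k])
        if dE <= 0 or rng.random() < np.exp(-dE / T):    # Metropolis acceptance
            S[i, j, k] = s_new

for _ in range(200):
    sweep(S, T)
print("m_z =", S[..., 2].mean())
```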
Abstract:
We have included the effective description of squark interactions with charginos/neutralinos in the MadGraph MSSM model. This effective description includes the effective Yukawa couplings and an additional logarithmic term which encodes the supersymmetry breaking. We have performed an extensive test of our implementation by comparing the partial decay widths of squarks into charginos and neutralinos obtained with the FeynArts/FormCalc programs and with the new model file in MadGraph. We present results for the cross section of top-squark production with subsequent decay into charginos and neutralinos.
Abstract:
DnaSP is a software package for the analysis of DNA polymorphism data. The present version introduces several new modules and features which, among other options, allow: (1) handling large data sets (~5 Mb per sequence); (2) conducting a large number of coalescent-based tests by Monte Carlo computer simulation; (3) extensive analyses of genetic differentiation and gene flow among populations; (4) analysing the evolutionary pattern of preferred and unpreferred codons; and (5) generating graphical outputs for easy visualization of results. Availability: The software package, including complete documentation and examples, is freely available to academic users from http://www.ub.es/dnasp
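The kind of coalescent-based Monte Carlo test referred to in point (2) can be sketched as follows: simulate the neutral-coalescent null distribution of the number of segregating sites S for a sample of n sequences and mutation parameter theta, and compare an observed value against it. This is textbook coalescent machinery, not DnaSP's actual code; the observed value and parameters are hypothetical.

```python
# Neutral-coalescent null distribution of segregating sites S.
import numpy as np

def simulate_S(n, theta, reps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    k = np.arange(2, n + 1)
    # waiting time while k lineages remain ~ Exp(rate k*(k-1)/2)
    T = rng.exponential(scale=2.0 / (k * (k - 1)), size=(reps, n - 1))
    total_length = (k * T).sum(axis=1)              # total branch length of each tree
    return rng.poisson(theta / 2.0 * total_length)  # mutations dropped on the tree

null_S = simulate_S(n=20, theta=5.0)
observed_S = 32                                     # hypothetical observed value
p_value = np.mean(null_S >= observed_S)             # one-sided Monte Carlo p-value
print(p_value)
```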
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As yet, however, the nature of this relationship has remained elusive because most of that work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters, and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of the vertical to the lateral correlation length, however, remains roughly constant across all of these combinations, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
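A sketch of a random-walk Metropolis search over the lateral and vertical correlation lengths (a_x, a_z) of a Gaussian autocorrelation model, i.e., the kind of Monte Carlo inversion machinery described above. The forward model, synthetic "data", lag ranges, proposal widths, and noise level are all illustrative assumptions; the real study fits the 2-D autocorrelation of a depth-migrated seismic image rather than an autocorrelation surface directly.

```python
# Random-walk Metropolis over (a_x, a_z) for a Gaussian autocorrelation model.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-200.0, 200.0, 81)          # lateral lags in metres (illustrative)
z = np.linspace(-50.0, 50.0, 41)            # vertical lags in metres (illustrative)
X, Z = np.meshgrid(x, z)

def forward(ax, az):
    return np.exp(-(X / ax) ** 2 - (Z / az) ** 2)   # Gaussian autocorrelation surface

observed = forward(120.0, 15.0) + 0.02 * rng.normal(size=X.shape)   # synthetic "data"

def misfit(ax, az):
    return np.sum((forward(ax, az) - observed) ** 2)

ax, az, sigma = 50.0, 5.0, 0.05             # starting model and assumed data noise
samples = []
for _ in range(20_000):
    ax_p = abs(ax + rng.normal(scale=5.0))  # random-walk proposals (reflected at zero)
    az_p = abs(az + rng.normal(scale=1.0))
    log_alpha = (misfit(ax, az) - misfit(ax_p, az_p)) / (2 * sigma**2)
    if log_alpha >= 0 or rng.random() < np.exp(log_alpha):
        ax, az = ax_p, az_p
    samples.append((ax, az))
print(np.mean(samples[5000:], axis=0))      # posterior mean after burn-in
```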
Abstract:
We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinearity inherent in estimating the first-order moving-average parameter and provides a closed-form estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with those of its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
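A sketch of the kind of Monte Carlo experiment described: compare a closed-form (moment-based) estimator of the MA(1) parameter with a nonlinear conditional-sum-of-squares estimator over repeated samples. Both estimators here are standard textbook stand-ins used only for illustration, not the paper's own iterative linear least-squares procedure; the true parameter, sample size, and replication count are assumptions.

```python
# Monte Carlo comparison of two MA(1) estimators (illustrative stand-ins).
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_ma1(theta, n, rng):
    e = rng.normal(size=n + 1)
    return e[1:] + theta * e[:-1]

def moment_estimate(y):
    r1 = np.corrcoef(y[:-1], y[1:])[0, 1]            # lag-1 autocorrelation
    if abs(r1) >= 0.5:                               # no real invertible root
        return np.sign(r1) * 1.0
    return (1 - np.sqrt(1 - 4 * r1**2)) / (2 * r1)   # invertible root of r1 = th/(1+th^2)

def css_estimate(y):
    def css(theta):
        e = np.zeros(len(y))
        for t in range(len(y)):                      # recover innovations recursively
            e[t] = y[t] - theta * (e[t - 1] if t > 0 else 0.0)
        return np.sum(e**2)
    return minimize_scalar(css, bounds=(-0.99, 0.99), method="bounded").x

theta_true, n, reps = 0.5, 200, 500
rng = np.random.default_rng(3)
mom = [moment_estimate(simulate_ma1(theta_true, n, rng)) for _ in range(reps)]
rng = np.random.default_rng(3)                       # same draws for a fair comparison
css = [css_estimate(simulate_ma1(theta_true, n, rng)) for _ in range(reps)]
print(np.mean(mom), np.std(mom), np.mean(css), np.std(css))
```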
Abstract:
The effect of hydrodynamic flow upon diffusion-limited deposition on a line is investigated using a Monte Carlo model. The growth process is governed by a convection-diffusion field, which is simulated by biased random walkers whose superimposed drift represents the convective flow. Distinct morphologies develop as the direction and strength of the drift are varied. When a horizontal drift parallel to the deposition plate is introduced, the diffusion-limited deposit changes into a single needle inclined to the plate. The width of the needle decreases with increasing strength of drift, and the angle between the needle and the plate is about 45° at high flow rate. In the presence of a drift inclined to the plate, convection-diffusion-limited deposition leads to the formation of a characteristic columnar morphology. In the limiting case where convection dominates, the deposition process is equivalent to ballistic deposition onto an inclined surface.
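A toy version of this kind of Monte Carlo model: walkers perform a biased random walk on a lattice (the bias standing in for the convective drift) and stick when they reach a site adjacent to the deposit growing from a line. Lattice size, drift strength, and particle number below are illustrative, not the paper's values.

```python
# Convection-diffusion-limited deposition on a line via biased random walkers.
import numpy as np

rng = np.random.default_rng(4)
W, H = 120, 120
occupied = np.zeros((W, H), dtype=bool)
occupied[:, 0] = True                           # the deposition line y = 0

def biased_step(bias_x, bias_y):
    """Pick right/left/up/down with probabilities tilted by the drift components."""
    p = np.array([1 + bias_x, 1 - bias_x, 1 + bias_y, 1 - bias_y], dtype=float)
    return [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.choice(4, p=p / p.sum())]

def deposit(n_particles, bias_x=0.0, bias_y=-0.4):  # negative bias_y drifts toward the plate
    for _ in range(n_particles):
        x, y = rng.integers(0, W), H - 2            # release near the top
        while True:
            dx, dy = biased_step(bias_x, bias_y)
            x, y = (x + dx) % W, min(max(y + dy, 1), H - 2)   # periodic in x, boxed in y
            # stick if any neighbour already belongs to the deposit
            if (occupied[(x + 1) % W, y] or occupied[(x - 1) % W, y]
                    or occupied[x, y - 1] or occupied[x, y + 1]):
                occupied[x, y] = True
                break

deposit(1000)
print("deposit height:", occupied.any(axis=0).sum())
```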
Abstract:
The diffusion of passive scalars convected by turbulent flows is addressed here. A practical procedure to obtain stochastic velocity fields with well-defined energy spectrum functions is also presented. Analytical results are derived using stochastic differential equations, under the basic hypothesis of rapidly decaying turbulence. These predictions compare favorably with direct computer simulations of stochastic differential equations containing multiplicative space-time correlated noise.
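One practical way to build a random field with a prescribed energy spectrum E(k), sketched below, is Fourier synthesis: assign spectral amplitudes proportional to sqrt(E(k)) and uniformly random phases, then transform back. The 1-D setting and the particular k^4 exp(-(k/k0)^2) spectrum are purely illustrative assumptions, not the paper's procedure.

```python
# Random 1-D field with a prescribed energy spectrum via Fourier synthesis.
import numpy as np

rng = np.random.default_rng(5)
N, Lbox = 4096, 2 * np.pi
k = np.fft.rfftfreq(N, d=Lbox / N) * 2 * np.pi   # angular wavenumbers of the real FFT
k0 = 10.0
E = k**4 * np.exp(-(k / k0) ** 2)                # target energy spectrum (illustrative)

phases = np.exp(2j * np.pi * rng.random(len(k))) # random phases
u_hat = np.sqrt(E) * phases
u_hat[0] = 0.0                                   # zero-mean field
u = np.fft.irfft(u_hat, n=N)                     # real-space velocity signal

# check that the measured spectrum follows the target shape
measured = np.abs(np.fft.rfft(u)) ** 2
print(np.corrcoef(measured[1:], E[1:])[0, 1])
```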
Abstract:
PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses, since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away the statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular-level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was allowed to decay at random locations for each model size and for seven different ratios of the number of decays to the number of cells, N_r: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose was attributed to each cell, equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution whose width equals the statistical uncertainty consistent with the ratio of decays to cells, i.e., N_r^(-1/2). From dose-volume histograms, the surviving fraction of cells, the equivalent uniform dose (EUD), and the TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 µm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same scenarios. The dosimetric quantities were calculated and compared to the adjusted simple-sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values agreed to within 2% between the adjusted simple-sphere and full cellular models. Models were also generated for a nonuniform distribution of activity, and the results of the adjusted spherical and cellular models showed similar agreement. The TCP values calculated for macroscopic tumors were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple-sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
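A minimal sketch of the adjustment step described in METHODS: take the average absorbed dose in each radial bin, assign every cell in the bin a dose drawn from a Gaussian whose relative width is N_r^(-1/2), then compute survival with a linear-quadratic model and a Poisson-type TCP. The bin doses, cell counts, and linear-quadratic parameters below are made-up illustrative numbers, not the paper's GEANT4 results, and the Poisson TCP formula is a common convention assumed here.

```python
# Adjusted simple-sphere idea: per-cell dose scatter of relative width N_r**-0.5.
import numpy as np

rng = np.random.default_rng(6)
bin_dose = np.array([40.0, 38.0, 35.0, 30.0, 22.0])       # Gy per radial bin (illustrative)
cells_per_bin = np.array([5_000, 15_000, 40_000, 80_000, 120_000])
N_r = 50                                                   # decays per cell
alpha, beta = 0.3, 0.03                                    # LQ parameters (assumed), Gy^-1, Gy^-2

tcp_log = 0.0
for D, n_cells in zip(bin_dose, cells_per_bin):
    # per-cell doses: bin average plus Gaussian scatter of relative width N_r^(-1/2)
    d = np.clip(D * (1.0 + rng.normal(scale=N_r ** -0.5, size=n_cells)), 0.0, None)
    sf = np.exp(-alpha * d - beta * d**2)                  # linear-quadratic survival
    tcp_log -= sf.sum()                                    # Poisson TCP: exp(-expected survivors)

print("TCP =", np.exp(tcp_log))
```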
Abstract:
We formulate a new mixing model to explore the hydrological and chemical conditions under which the stream-catchment interface (SCI) influences the release of reactive solutes into stream water during storms. Physically, the SCI corresponds to the hyporheic/riparian sediments. In the new model this interface is coupled, through a bidirectional water exchange, to the conventional two-component mixing model. Simulations show that the influence of the SCI on stream solute dynamics during storms is detectable when the runoff event is dominated by the infiltrated groundwater component that flows through the SCI before entering the stream, and when the flux of solutes released from SCI sediments is similar to, or higher than, the solute flux carried by the groundwater. Dissolved organic carbon (DOC) and nitrate data from two small Mediterranean streams obtained during storms are compared to results from simulations using the new model to discern the circumstances under which the SCI is likely to control the dynamics of reactive solutes in streams. The simulations and the comparisons with empirical data suggest that the new mixing model may be especially appropriate for streams in which periodic, or persistent, abrupt changes in the level of riparian groundwater exert hydrologic control on the flux of biologically reactive solutes between the riparian/hyporheic compartment and the stream water.
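A toy discrete-time illustration of the model structure: event water and groundwater mix in the stream, but the groundwater component first passes through an SCI box that exchanges solute bidirectionally with it. All fluxes, volumes, concentrations, and the exchange coefficient below are invented placeholders, not the paper's formulation or data.

```python
# Two-component mixing with a stream-catchment interface (SCI) box (toy numbers).
import numpy as np

n_steps = 100
Q_event = np.concatenate([np.linspace(0, 5, 30), np.linspace(5, 0, 70)])  # storm hydrograph (L/s)
Q_gw = np.full(n_steps, 2.0)                 # groundwater inflow (L/s)
C_event, C_gw = 0.5, 3.0                     # end-member concentrations (mg/L)

V_sci, C_sci = 500.0, 8.0                    # SCI volume (L) and initial concentration (mg/L)
k_ex = 0.3                                   # bidirectional exchange fraction per step

C_stream = np.zeros(n_steps)
for t in range(n_steps):
    # groundwater routed through the SCI partially equilibrates with it
    C_gw_out = (1 - k_ex) * C_gw + k_ex * C_sci
    # SCI solute mass balance for the exchanged water
    C_sci += k_ex * Q_gw[t] * (C_gw - C_sci) / V_sci
    # conventional two-component mixing in the stream
    C_stream[t] = (Q_event[t] * C_event + Q_gw[t] * C_gw_out) / (Q_event[t] + Q_gw[t])

print(C_stream[:5], C_stream[-5:])
```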
Abstract:
Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distribution and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since the scale is a main component of the mean, it is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
Abstract:
BACKGROUND: Anal condylomata acuminata (ACA) are caused by human papillomavirus (HPV) infection, which is transmitted by close physical and sexual contact. Surgical treatment of ACA has an overall success rate of 71% to 93%, with a recurrence rate between 4% and 29%. The aim of this study was to assess a possible association between HPV type and ACA recurrence after surgical treatment. METHODS: We performed a retrospective analysis of 140 consecutive patients who underwent surgery for ACA from January 1990 to December 2005 at our tertiary University Hospital. We confirmed ACA by histopathological analysis and determined the HPV type using the polymerase chain reaction. Patients gave consent for HPV testing and completed a questionnaire. We looked at the association between ACA, HPV type, and HIV disease. We used chi-squared, Monte Carlo simulation, and Wilcoxon tests for statistical analysis. RESULTS: Among the 140 patients (123 M/17 F), HPV 6 and 11 were the most frequently encountered types (51% and 28%, respectively). Recurrence occurred in 35 (25%) patients. HPV 11 was present in 19 (41%) of these recurrences, which is statistically significant when compared with the other HPV types. There was no significant difference between recurrence rates in the 33 (24%) HIV-positive and the HIV-negative patients. CONCLUSIONS: HPV 11 is associated with a higher recurrence rate of ACA. This makes routine clinical HPV typing questionable. Follow-up is required to identify recurrence and to treat it early, especially if HPV 11 has been identified.
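For reference, a Monte Carlo (permutation) version of the chi-squared test of association between a categorical exposure (e.g., HPV type) and a binary outcome (recurrence), the kind of test cited in the Methods. The data below are randomly generated placeholders, not the study's patients.

```python
# Monte Carlo permutation p-value for a chi-squared test of association.
import numpy as np

rng = np.random.default_rng(7)
hpv = rng.choice(["HPV6", "HPV11", "other"], size=140, p=[0.5, 0.3, 0.2])   # synthetic exposure
recur = rng.random(140) < 0.25                                              # synthetic outcome

def chi2_stat(x, y):
    """Pearson chi-squared statistic of the x-by-y contingency table."""
    cats_x, cats_y = np.unique(x), np.unique(y)
    obs = np.array([[np.sum((x == a) & (y == b)) for b in cats_y] for a in cats_x])
    exp = obs.sum(axis=1, keepdims=True) * obs.sum(axis=0, keepdims=True) / obs.sum()
    return np.sum((obs - exp) ** 2 / exp)

t_obs = chi2_stat(hpv, recur)
n_sim = 10_000
t_null = np.array([chi2_stat(hpv, rng.permutation(recur)) for _ in range(n_sim)])
p_mc = (1 + np.sum(t_null >= t_obs)) / (1 + n_sim)    # Monte Carlo p-value
print(p_mc)
```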