16 results for classical Monte Carlo simulations
at Universidade Complutense de Madrid
Abstract:
We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latency in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and we summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which mimic, and provide understanding of, the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). Our equilibrium simulations, on the other hand, are unprecedented both for the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
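As an illustration of the parallel tempering (replica exchange) dynamics mentioned in this abstract — not of the Janus hardware itself — here is a minimal Python sketch of the exchange step between replicas running at neighbouring temperatures. All names are illustrative; in practice each replica is also evolved with local spin updates between exchange attempts.

```python
import math
import random

def parallel_tempering_swap(energies, betas):
    """Attempt one sweep of replica exchanges between neighbouring temperatures.

    energies[i] is the current energy of the replica initially assigned to
    inverse temperature betas[i]; betas is assumed sorted in increasing order.
    Returns the list mapping each temperature slot to a replica index.
    """
    order = list(range(len(betas)))
    for i in range(len(betas) - 1):
        a, b = order[i], order[i + 1]
        # Standard replica-exchange acceptance:
        # min(1, exp[(beta_i - beta_{i+1}) * (E_a - E_b)])
        delta = (betas[i] - betas[i + 1]) * (energies[a] - energies[b])
        if delta >= 0 or random.random() < math.exp(delta):
            order[i], order[i + 1] = b, a
    return order
```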
Abstract:
We present Tethered Monte Carlo, a simple, general-purpose method of computing the effective potential of the order parameter (Helmholtz free energy). This formalism is based on a new statistical ensemble, closely related to the micromagnetic one, but with an extended configuration space (through Creutz-like demons). Canonical averages for arbitrary values of the external magnetic field are computed without additional simulations. The method is put to work in the two-dimensional Ising model, where the existence of exact results enables us to perform high-precision checks. A rather peculiar feature of our implementation, which employs a local Metropolis algorithm, is the total absence, within errors, of critical slowing down for magnetic observables. Indeed, high-accuracy results are presented for lattices as large as L = 1024.
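The tethered ensemble and its demons are not reproduced here; as a point of reference, the following minimal sketch shows the plain local Metropolis update of the 2D Ising model on which the method is built, with the magnetization change tracked per sweep. Names and parameters are illustrative.

```python
import math
import random

def metropolis_sweep(spins, L, beta):
    """One Metropolis sweep of the 2D Ising model on an L x L torus.

    spins is a flat list of +1/-1 values of length L*L; returns the change
    in total magnetization, the order parameter that tethered-type methods
    constrain and monitor.
    """
    dM = 0
    for _ in range(L * L):
        i = random.randrange(L * L)
        x, y = i % L, i // L
        # Sum over the four nearest neighbours with periodic boundaries.
        nn = (spins[(x + 1) % L + y * L] + spins[(x - 1) % L + y * L]
              + spins[x + ((y + 1) % L) * L] + spins[x + ((y - 1) % L) * L])
        dE = 2 * spins[i] * nn            # energy cost of flipping spin i
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
            dM += 2 * spins[i]
    return dM
```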
Abstract:
The Hybrid Monte Carlo algorithm is adapted to the simulation of a system of classical degrees of freedom coupled to non-self-interacting lattice fermions. The diagonalization of the Hamiltonian matrix is avoided by introducing a path-integral formulation of the problem in d + 1 Euclidean space–time. A perfect-action formulation allows us to work in continuous Euclidean time, without the need for a Trotter–Suzuki extrapolation. To demonstrate the feasibility of the method we study the Double Exchange Model in three dimensions. The complexity of the algorithm grows only with the system volume, allowing us to simulate lattices as large as 16^3 on a personal computer. We conclude that the second-order paramagnetic–ferromagnetic phase transition of double exchange materials close to half-filling belongs to the universality class of the three-dimensional classical Heisenberg model.
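A hedged sketch of the generic Hybrid (Hamiltonian) Monte Carlo step that this paper adapts: leapfrog molecular-dynamics evolution followed by a Metropolis accept/reject. The toy Gaussian action below merely stands in for the paper's effective action; the perfect-action fermionic formulation is not reproduced.

```python
import math
import random

def leapfrog(q, p, grad, eps, n_steps):
    """Leapfrog integration of Hamilton's equations for an action S(q)."""
    p = [pi - 0.5 * eps * g for pi, g in zip(p, grad(q))]   # initial half kick
    for step in range(n_steps):
        q = [qi + eps * pi for qi, pi in zip(q, p)]          # drift
        scale = eps if step < n_steps - 1 else 0.5 * eps     # final half kick
        p = [pi - scale * g for pi, g in zip(p, grad(q))]
    return q, p

def hmc_step(q, action, grad, eps=0.1, n_steps=20):
    """One HMC update: draw momenta, integrate, accept or reject."""
    p0 = [random.gauss(0.0, 1.0) for _ in q]
    q1, p1 = leapfrog(q, p0, grad, eps, n_steps)
    h0 = action(q) + 0.5 * sum(pi * pi for pi in p0)
    h1 = action(q1) + 0.5 * sum(pi * pi for pi in p1)
    if random.random() < math.exp(min(0.0, h0 - h1)):
        return q1          # accepted
    return q               # rejected: keep the old configuration

# Toy example: Gaussian action S(q) = sum_i q_i^2 / 2, so grad S = q.
action = lambda q: 0.5 * sum(x * x for x in q)
grad = lambda q: list(q)
```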
Abstract:
In the Monte Carlo simulation of both lattice field theories and models of statistical mechanics, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
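The variance-reduction step described here can be stated compactly: given measurements of an observable O and of an identity observable I whose exact mean I_exact is known, the improved estimator is mean(O) - c*(mean(I) - I_exact), with the optimal coefficient c = Cov(O, I)/Var(I). A minimal numerical sketch with NumPy; the arrays stand in for actual Monte Carlo time series.

```python
import numpy as np

def control_variate_estimate(obs, ident, ident_exact):
    """Reduce the variance of mean(obs) using an exactly-known identity.

    obs         : per-measurement values of the observable of interest
    ident       : per-measurement values of an observable whose exact mean
                  (e.g. from a Schwinger-Dyson or Callen identity) is ident_exact
    Returns the control-variate estimate of <obs>.
    """
    obs = np.asarray(obs, dtype=float)
    ident = np.asarray(ident, dtype=float)
    # Optimal coefficient c = Cov(obs, ident) / Var(ident), estimated from
    # the same sample (a small bias, negligible for long runs).
    cov = np.cov(obs, ident)
    c = cov[0, 1] / cov[1, 1]
    return obs.mean() - c * (ident.mean() - ident_exact)
```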
Abstract:
A new Monte Carlo algorithm is introduced for the simulation of supercooled liquids and glass formers, and tested in two model glasses. The algorithm thermalizes well below the Mode Coupling temperature and outperforms other optimized Monte Carlo methods.
Abstract:
It is shown that a bosonic formulation of the double-exchange model, one of the classical models for magnetism, generates dynamically a gauge-invariant phase in a finite region of the phase diagram. We use analytical methods, Monte Carlo simulations and finite-size scaling analysis. We study the transition line between that region and the paramagnetic phase. The numerical results show that this transition line belongs to the universality class of the antiferromagnetic RP^2 model. The fact that one can define a universality class for the antiferromagnetic RP^2 model, different from that of the O(N) models, is puzzling and somehow contradicts naive expectations about universality.
Abstract:
We study the phase diagram of the double exchange model, with antiferromagnetic interactions, in a cubic lattice both at zero and finite temperature. There is a rich variety of magnetic phases, combined with regions where phase separation takes place. We identify phases, intrinsic to the cubic lattice, which are stable for realistic values of the interactions and dopings. Some of these phases break chiral symmetry, leading to unusual features.
Abstract:
We study a polydisperse soft-spheres model for colloids by means of microcanonical Monte Carlo simulations. We consider a polydispersity as high as 24%. Although solidification occurs, neither a crystal nor an amorphous state is thermodynamically stable. A finite-size scaling analysis reveals that in the thermodynamic limit: (a) the fluid-solid transition is rather a crystal-amorphous phase separation, (b) such phase separation is preceded by the dynamic glass transition, and (c) small and big particles arrange themselves in the two phases according to a complex pattern not predicted by any fractionation scenario.
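A brief sketch of the ingredients named in this abstract, under assumptions not taken from the paper: the common inverse-power (soft-sphere) pair potential with additive diameters, and a flat distribution of diameters tuned to the quoted 24% polydispersity. The exponent, mixing rule and distribution are all illustrative choices.

```python
import random

def soft_sphere_energy(r, sigma_i, sigma_j, epsilon=1.0, exponent=12):
    """Pair energy u(r) = epsilon * (sigma_ij / r)**n with
    additive diameters sigma_ij = (sigma_i + sigma_j) / 2 (assumed form)."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    return epsilon * (sigma_ij / r) ** exponent

def draw_diameters(n, mean=1.0, polydispersity=0.24):
    """Sample particle diameters with the quoted 24% relative spread.

    A uniform distribution is assumed purely for illustration;
    polydispersity = std(sigma) / mean(sigma).
    """
    half_width = polydispersity * mean * 3 ** 0.5   # std of U[m-h, m+h] is h/sqrt(3)
    return [random.uniform(mean - half_width, mean + half_width) for _ in range(n)]
```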
Abstract:
This paper describes JANUS, a modular, massively parallel and reconfigurable FPGA-based computing system. Each JANUS module has a computational core and a host. The computational core is a 4x4 array of FPGA-based processing elements with nearest-neighbor data links. Processors are also directly connected to an I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for, but not limited to, the requirements of a class of hard scientific applications characterized by regular code structure, unconventional data-manipulation instructions and moderate database size. We discuss the architecture of this configurable machine, and focus on its use for Monte Carlo simulations of statistical mechanics. On this class of applications JANUS achieves impressive performance: in some cases one JANUS processing element outperforms a high-end PC by a factor of ≈1000. We also discuss the role of JANUS in other classes of scientific applications.
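To picture the topology described above, a tiny sketch that enumerates the nearest-neighbour data links of a 4x4 array of processing elements. An open mesh is assumed here; the abstract does not say whether the real machine closes the grid into a torus.

```python
def mesh_links(rows=4, cols=4):
    """List the nearest-neighbour links of a rows x cols array of
    processing elements labelled PE(r, c); open boundaries assumed."""
    links = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                links.append(((r, c), (r, c + 1)))   # horizontal link
            if r + 1 < rows:
                links.append(((r, c), (r + 1, c)))   # vertical link
    return links

print(len(mesh_links()))   # 24 links for a 4x4 open mesh
```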
Abstract:
The cold climate anomaly about 8200 years ago is investigated with CLIMBER-2, a coupled atmosphere-ocean-biosphere model of intermediate complexity. This climate model simulates a cooling of about 3.6 K over the North Atlantic induced by a meltwater pulse from Lake Agassiz routed through the Hudson Strait. The meltwater pulse is assumed to have a volume of 1.6 x 10^14 m^3 and a period of discharge of 2 years on the basis of glaciological modeling of the decay of the Laurentide Ice Sheet (LIS). We present a possible mechanism which can explain the centennial duration of the 8.2 ka cold event. The mechanism is related to the existence of an additional equilibrium climate state with reduced North Atlantic Deep Water (NADW) formation and a southward shift of the NADW formation area. Hints of the additional climate state were obtained from the widely varying duration of the pulse-induced cold episode in response to overlaid random freshwater fluctuations in Monte Carlo simulations. The model equilibrium state was attained by releasing a weak multicentury freshwater flux through the St. Lawrence pathway, completed by the meltwater pulse. The existence of such a climate mode appears essential for reproducing climate anomalies in close agreement with paleoclimatic reconstructions of the 8.2 ka event. The results furthermore suggest that the temporal evolution of the cold event was partly a matter of chance.
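The quoted pulse parameters imply a mean freshwater discharge that is easy to check: 1.6 x 10^14 m^3 released over about 2 years corresponds to roughly 2.5 Sv (1 Sv = 10^6 m^3/s). A one-line verification:

```python
volume = 1.6e14                     # m^3, meltwater pulse volume from the abstract
duration = 2 * 365.25 * 86400       # s, two-year discharge period
flux_sv = volume / duration / 1e6   # mean discharge in Sverdrups
print(f"{flux_sv:.2f} Sv")          # ~2.54 Sv
```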
Abstract:
We investigate the critical properties of the four-state commutative random permutation glassy Potts model in three and four dimensions by means of Monte Carlo simulations and a finite-size scaling analysis. By using a field programmable gate array, we have been able to thermalize a large number of samples of systems with large volume. This has allowed us to observe a spin-glass ordered phase in d=4 and to study the critical properties of the transition. In d=3, our results are consistent with the presence of a Kosterlitz-Thouless transition, but also with different scenarios: transient effects due to a value of the lower critical dimension slightly below 3 could be very important.
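Several of these abstracts rely on finite-size scaling. As a generic sketch of the rescaling used to collapse data from different lattice sizes L onto a single curve, a dimensionless observable measured at temperature T is plotted against (T - Tc) * L**(1/nu). The helper below only performs the rescaling; critical parameters and data would come from the actual simulations.

```python
def fss_rescale(T, L, observable, Tc, nu, exponent_ratio=0.0):
    """Rescale (T, L, observable) data for a finite-size scaling collapse.

    Returns (x, y) with x = (T - Tc) * L**(1/nu) and
    y = observable * L**exponent_ratio; for a dimensionless quantity such as
    xi_L / L or the Binder ratio, exponent_ratio is 0.
    """
    x = [(t - Tc) * l ** (1.0 / nu) for t, l in zip(T, L)]
    y = [o * l ** exponent_ratio for o, l in zip(observable, L)]
    return x, y
```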
Abstract:
The energy spectrum of ultra-high energy cosmic rays above 10^18 eV is measured using the hybrid events collected by the Pierre Auger Observatory between November 2005 and September 2010. The large exposure of the Observatory allows the measurement of the main features of the energy spectrum with high statistics. Full Monte Carlo simulations of the extensive air showers (based on the CORSIKA code) and of the hybrid detector response are adopted here as an independent cross-check of the standard analysis (Phys. Lett. B 685, 239 (2010)). The dependence on mass composition and other systematic uncertainties are discussed in detail and, in the full Monte Carlo approach, a region of confidence for flux measurements is defined when all the uncertainties are taken into account. An update is also reported of the energy spectrum obtained by combining the hybrid spectrum and that measured using the surface detector array.
Abstract:
We describe the hardwired implementation of algorithms for Monte Carlo simulations of a large class of spin models. We have implemented these algorithms as VHDL codes and we have mapped them onto a dedicated processor based on a large FPGA device. The measured performance of one such processor is comparable to that of O(100) carefully programmed high-end PCs, and turns out to be even better for some selected spin models. We describe here the codes that we are currently executing on the IANUS massively parallel FPGA-based system.
Abstract:
In this paper we introduce the concept of the Lateral Trigger Probability (LTP) function, i.e., the probability for an Extensive Air Shower (EAS) to trigger an individual detector of a ground-based array as a function of the distance to the shower axis, taking into account the energy, mass and direction of the primary cosmic ray. We apply this concept to the surface array of the Pierre Auger Observatory, consisting of a grid of about 1600 water Cherenkov stations with 1.5 km spacing. Using Monte Carlo simulations of ultra-high energy showers, the LTP functions are derived for energies in the range between 10^17 and 10^19 eV and zenith angles up to 65 degrees. A parametrization combining a step function with an exponential is found to reproduce them very well in the considered range of energies and zenith angles. The LTP functions can also be obtained from data, using events simultaneously observed by the fluorescence and the surface detector of the Pierre Auger Observatory (hybrid events). We validate the Monte Carlo results by showing that LTP functions derived from data are in good agreement with simulations.
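The abstract only states that a step function combined with an exponential reproduces the LTP well; the exact parametrization is not given here. One plausible illustrative form (an assumption, not the collaboration's actual fit) is P(r) = 1 for r <= r0 and exp(-(r - r0)/lambda) beyond:

```python
import math

def lateral_trigger_probability(r, r0, lam):
    """Illustrative LTP(r): fully efficient inside r0, exponential fall-off beyond.

    r   : distance to the shower axis (m)
    r0  : radius of the fully efficient region (m)
    lam : exponential decay length (m)
    Parameter values would come from fits to simulations or hybrid data.
    """
    return 1.0 if r <= r0 else math.exp(-(r - r0) / lam)
```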