81 results for statistical physics
Abstract:
We introduce a simple mean-field lattice model to describe the behavior of nematic elastomers. This model combines the Maier-Saupe-Zwanzig approach to liquid crystals and an extension to lattice systems of the Warner-Terentjev theory of elasticity, with the addition of quenched random fields. We use standard techniques of statistical mechanics to obtain analytic solutions for the full range of parameters. Among other results, we show the existence of a stress-strain coexistence curve below a freezing temperature, analogous to the P-V diagram of a simple fluid, with the disorder strength playing the role of temperature. Below a critical value of disorder, the tie lines in this diagram resemble the experimental stress-strain plateau and may be interpreted as signatures of the characteristic polydomain-monodomain transition. Also, in the monodomain case, we show that random fields may soften the first-order transition between nematic and isotropic phases, provided the samples are formed in the nematic state.
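For orientation, the ingredients can be sketched as a Maier-Saupe-type lattice Hamiltonian supplemented by quenched random fields (a schematic form for illustration only; the paper's specific Maier-Saupe-Zwanzig model and its Warner-Terentjev elastic coupling are not reproduced here):

$$\mathcal{H} = -\frac{A}{N}\sum_{i<j} S_i^{\mu\nu} S_j^{\mu\nu} - \sum_i h_i^{\mu\nu} S_i^{\mu\nu}, \qquad S_i^{\mu\nu} = \tfrac{1}{2}\left(3\,n_i^{\mu} n_i^{\nu} - \delta^{\mu\nu}\right),$$

where $n_i$ is the local director, restricted to the Cartesian axes in the Zwanzig approximation, and the $h_i^{\mu\nu}$ are quenched random fields drawn from a fixed distribution.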
Abstract:
We present the transition amplitude for a particle moving in a space with two times and D space dimensions, having an Sp(2, R) local symmetry and an SO(D, 2) rigid symmetry. It was obtained from the BRST-BFV quantization with a unique gauge choice. We show that by constraining the initial and final points of this amplitude to lie on a hypersurface of the (D + 2)-dimensional space, the resulting amplitude reproduces well-known systems in lower dimensions. This work provides an alternative way to derive the effects of two-time physics, with all the results following from a single transition amplitude.
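For reference, the Sp(2, R) local symmetry of two-time physics is generated by three first-class constraints on the (D + 2)-dimensional phase space (the standard form in this framework, quoted here for orientation):

$$X \cdot X = 0, \qquad X \cdot P = 0, \qquad P \cdot P = 0,$$

where the dot products are taken with the flat metric of signature $(D, 2)$; it is this structure that makes the rigid SO(D, 2) symmetry manifest.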
Abstract:
Gaussianity and statistical isotropy of the Universe are modern cosmology's minimal set of hypotheses. In this work we introduce a new statistical test to detect observational deviations from this minimal set. By defining the temperature correlation function over the whole celestial sphere, we are able to independently quantify both angular and planar dependence (modulations) of the CMB temperature power spectrum over different slices of this sphere. Given that planar dependence leads to further modulations of the usual angular power spectrum $C_\ell$, this test can potentially reveal richer structures in the morphology of the primordial temperature field. We have also constructed an unbiased estimator for this angular-planar power spectrum which naturally generalizes the estimator for the usual $C_\ell$'s. With the help of a chi-square analysis, we have used this estimator to search for observational deviations from statistical isotropy in WMAP's 5-year release data set (ILC5), where we found only slight anomalies at the angular scales $\ell = 7$ and $\ell = 8$. Since this angular-planar statistic is model-independent, it is ideal for searches of statistical anisotropy (e.g., contaminations from the galactic plane) and for characterizing non-Gaussianities.
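For comparison, the standard unbiased estimator that the angular-planar statistic generalizes is the usual average over the multipole coefficients $a_{\ell m}$ of the temperature map:

$$\hat{C}_\ell = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} |a_{\ell m}|^2 .$$

(The explicit form of the generalized angular-planar estimator is given in the paper itself and is not reproduced here.)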
Abstract:
We prove a Goldstone theorem in thermal relativistic quantum field theory, which relates spontaneous symmetry breaking to the rate of spacelike decay of the two-point function. The critical rate of fall-off coincides with that of the massless free scalar field theory. Related results and open problems are briefly discussed.
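As a point of reference for this critical rate, the two-point function of the free massless scalar field in four spacetime dimensions falls off at spacelike separation as

$$\langle \phi(x)\,\phi(0) \rangle = \frac{1}{4\pi^2\,(-x^2)}, \qquad x^2 < 0,$$

i.e., like the inverse square of the spacelike distance (a standard free-field result, quoted here for orientation).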
Abstract:
Eleven density functionals are compared with regard to their performance for the lattice constants of solids. We consider standard functionals, such as the local-density approximation and the Perdew-Burke-Ernzerhof (PBE) generalized-gradient approximation (GGA), as well as variations of PBE GGA, such as PBEsol and similar functionals, PBE-type functionals employing a tighter Lieb-Oxford bound, and combinations thereof. On a test set of 60 solids, we perform a system-by-system analysis for selected functionals and a full statistical analysis for all of them. The impact of restoring the gradient expansion and of tightening the Lieb-Oxford bound is discussed and confronted with previous results obtained from other codes, functionals, or test sets. No functional is uniformly good for all investigated systems, but surprisingly, and pleasingly, the simplest possible modifications to PBE turn out to have the most beneficial effect on its performance. Atomization energies of molecules were also considered: on a test set of six molecules, we found that the PBE functional is clearly the best, the others leading to strong overbinding.
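For context, the Lieb-Oxford bound constrains the exchange-correlation energy from below,

$$E_{xc}[n] \;\ge\; -C \int n^{4/3}(\mathbf{r})\, d^3r,$$

with a universal constant $C$. In PBE it enters through the exchange enhancement factor $F_x(s) = 1 + \kappa - \kappa/(1 + \mu s^2/\kappa)$, whose parameter $\kappa = 0.804$ is fixed by (a local form of) the bound; the "tighter Lieb-Oxford bound" variants correspond to smaller values of $\kappa$.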
Abstract:
Identified charged pion, kaon, and proton spectra are used to explore the system-size dependence of bulk freeze-out properties in Cu + Cu collisions at $\sqrt{s_{NN}} = 200$ and 62.4 GeV. The data are studied with hydrodynamically motivated blast-wave and statistical model frameworks in order to characterize the freeze-out properties of the system. The dependence of freeze-out parameters on beam energy and collision centrality is discussed. Using the existing results from Au + Au and pp collisions, the dependence of freeze-out parameters on the system size is also explored. This multidimensional systematic study furthers our understanding of the QCD phase diagram, revealing the importance of the initial geometrical overlap of the colliding ions. The analysis of Cu + Cu collisions expands the system-size dependence studies from Au + Au data with detailed measurements in the smaller system. The systematic trends of the bulk freeze-out properties of charged particles are studied with respect to the total charged-particle multiplicity at midrapidity, exploring the influence of initial-state effects.
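As an illustration of the blast-wave framework, the following is a minimal sketch of the commonly used Schnedermann-Sollfrank-Heinz spectrum (parameter values and names are illustrative assumptions, not the analysis code of this paper):

```python
import numpy as np
from scipy.special import i0, k1
from scipy.integrate import trapezoid

def blast_wave_spectrum(pT, mass, T_kin, beta_s, n=1.0, R=1.0, steps=400):
    """Blast-wave dN/(mT dmT) at midrapidity, up to an overall normalization.

    pT     : transverse momentum (GeV/c)
    mass   : particle mass (GeV/c^2)
    T_kin  : kinetic freeze-out temperature (GeV)
    beta_s : surface transverse flow velocity
    n      : exponent of the radial flow profile beta(r) = beta_s * (r/R)**n
    """
    mT = np.sqrt(pT**2 + mass**2)                 # transverse mass
    r = np.linspace(0.0, R, steps)                # radial position in the source
    rho = np.arctanh(beta_s * (r / R)**n)         # transverse flow rapidity
    integrand = r * mT * i0(pT * np.sinh(rho) / T_kin) * k1(mT * np.cosh(rho) / T_kin)
    return trapezoid(integrand, r)

# Example: shape of a pion spectrum for illustrative freeze-out parameters
for pT in (0.3, 0.6, 1.0):
    print(pT, blast_wave_spectrum(pT, mass=0.140, T_kin=0.120, beta_s=0.8))
```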
Abstract:
We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time $\tau(G_N)$ of a planar graph $G_N$ of $N$ vertices and maximal degree $d$ is lower bounded by $\tau(G_N) \ge C_d\, N (\ln N)^2$ with $C_d = (d/4\pi)\tan(\pi/d)$, with equality holding for some geometries. We tested this conjecture on the regular honeycomb ($d = 3$), regular square ($d = 4$), regular elongated triangular ($d = 5$), and regular triangular ($d = 6$) lattices, as well as on the nonregular Union Jack lattice ($d_{\min} = 4$, $d_{\max} = 8$). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
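For a flavor of the numerical experiment, here is a minimal Monte Carlo sketch of a cover-time measurement for a simple random walk on an $L \times L$ square lattice with periodic boundaries (illustrative only; the lattices, sizes, and statistics of the actual study are far more extensive):

```python
import random

def cover_time_square_lattice(L, rng=random):
    """Number of steps for a simple random walk to visit all L*L torus sites."""
    N = L * L
    x = y = 0
    visited = {(0, 0)}
    steps = 0
    while len(visited) < N:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % L, (y + dy) % L
        visited.add((x, y))
        steps += 1
    return steps

# Average over a few walks; the conjectured scale is C(d) * N * (ln N)^2
samples = [cover_time_square_lattice(16) for _ in range(20)]
print(sum(samples) / len(samples))
```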
Abstract:
We numerically study the dynamics of a discrete spring-block model introduced by Olami, Feder, and Christensen (OFC) to mimic earthquakes and investigate to what extent this simple model is able to reproduce the observed spatiotemporal clustering of seismicity. Following a recently proposed method to characterize such clustering by networks of recurrent events [J. Davidsen, P. Grassberger, and M. Paczuski, Geophys. Res. Lett. 33, L11304 (2006)], we find that for synthetic catalogs generated by the OFC model these networks have many nontrivial statistical properties. This includes characteristic degree distributions, very similar to what has been observed for real seismicity. There are, however, also significant differences between the OFC model and earthquake catalogs, indicating that this simple model is insufficient to account for certain aspects of the spatiotemporal clustering of seismicity.
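A minimal sketch of the OFC update rule on an open-boundary grid (the standard formulation of the model; the value of the conservation parameter alpha and the catalog bookkeeping used in the paper are assumptions here):

```python
import numpy as np

def ofc_event(F, alpha=0.2):
    """Drive the Olami-Feder-Christensen model to its next event.

    F     : 2D array of site forces; open boundaries (spilled force is lost)
    alpha : fraction of a toppling site's force passed to each neighbor
            (alpha < 0.25 makes the dynamics nonconservative)
    Returns the event size (number of topplings).
    """
    F += 1.0 - F.max()                           # uniform drive to threshold
    size = 0
    while True:
        unstable = np.argwhere(F >= 1.0)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            f, F[i, j] = F[i, j], 0.0            # topple: reset the site
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < F.shape[0] and 0 <= nj < F.shape[1]:
                    F[ni, nj] += alpha * f       # redistribute to neighbors
            size += 1

rng = np.random.default_rng(0)
F = rng.random((32, 32))
sizes = [ofc_event(F) for _ in range(1000)]      # synthetic event catalog
print(max(sizes), np.mean(sizes))
```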
Abstract:
Finite-size scaling analysis turns out to be a powerful tool to calculate the phase diagram, as well as the critical properties, of two-dimensional classical statistical mechanics models and quantum Hamiltonians in one dimension. The most widely used method to locate quantum critical points is the so-called crossing method, where the estimates are obtained by comparing the mass gaps of two distinct lattice sizes. The success of this method is due to its simplicity and its ability to provide accurate results even for relatively small lattice sizes. In this paper, we introduce an estimator that locates quantum critical points by exploiting the known distinct behavior of the entanglement entropy in critical and noncritical systems. As a benchmark test, we use this new estimator to locate the critical point of the quantum Ising chain and the critical line of the spin-1 Blume-Capel quantum chain. The tricritical point of this last model is also obtained. A comparison with the standard crossing method is also presented. The method we propose is simple to implement in practice, particularly in density-matrix renormalization-group calculations, and, like the crossing method, provides remarkably accurate results for quite small lattice sizes. Our applications show that the proposed method has several advantages over the standard crossing method, and we believe it will become popular in future numerical studies.
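The distinct behavior being exploited is the well-known scaling of the block entanglement entropy: for a block of $\ell$ sites in a periodic critical chain of length $L$ with central charge $c$,

$$S(\ell) = \frac{c}{3}\,\ln\!\left[\frac{L}{\pi}\,\sin\!\left(\frac{\pi\ell}{L}\right)\right] + c_1,$$

whereas away from criticality $S(\ell)$ saturates at a constant set by the correlation length; the estimator locates the coupling at which the logarithmic growth sets in.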
Abstract:
Online music databases have grown significantly as a consequence of the rapid expansion of the Internet and digital audio, requiring the development of faster and more efficient tools for music content analysis. Musical genres are widely used to organize music collections. In this paper, the problem of automatic single- and multi-label music genre classification is addressed by exploring rhythm-based features obtained from a corresponding complex network representation. A Markov model is built in order to analyse the temporal sequence of rhythmic notation events. Feature analysis is performed by using two multivariate statistical approaches: principal components analysis (unsupervised) and linear discriminant analysis (supervised). Similarly, two classifiers are applied in order to identify the category of rhythms: a parametric Bayesian classifier under the Gaussian hypothesis (supervised) and agglomerative hierarchical clustering (unsupervised). Results obtained using the kappa coefficient, together with the obtained clusters, corroborated the effectiveness of the proposed method.
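As a sketch of the Markov-model step, the transition matrix over rhythmic notation events can be estimated from a symbolic sequence as follows (the event alphabet is a hypothetical stand-in; the paper's complex network representation and feature pipeline are richer than this):

```python
import numpy as np

def transition_matrix(events, states):
    """Row-normalized first-order Markov transition matrix of a sequence."""
    index = {s: k for k, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(events, events[1:]):         # consecutive event pairs
        counts[index[a], index[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical rhythm alphabet: note durations in quarter-note units
states = [0.25, 0.5, 1.0, 2.0]
sequence = [0.5, 0.5, 1.0, 0.25, 0.25, 0.5, 1.0, 2.0, 1.0, 0.5]
T = transition_matrix(sequence, states)
print(T)  # flattened rows of T can serve as feature vectors for PCA or LDA
```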
Abstract:
We investigate the performance of a variant of Axelrod's model for dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size $F$ by a Boolean binary perceptron. In this heuristic, $N$ agents, characterized by binary strings of length $F$ which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable $F/N^{1/4}$, so that the number of agents must increase with the fourth power of the problem size, $N \propto F^4$, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with $F^6$, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
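A minimal sketch of an ACH-style sweep under a generic cost function (the paper's cost is the classification error of a Boolean binary perceptron; the toy cost below is a stand-in for illustration):

```python
import random

def ach_sweep(lattice, L, F, cost):
    """One sweep of an Adaptive-Culture-Heuristic-style dynamics.

    lattice : dict mapping (i, j) -> list of F bits (the agent's 'culture')
    cost    : function from a bit string to its cost (lower is better)
    Each picked agent copies one disagreeing bit from a random neighbor
    whose string is at least as good, spreading low-cost strings.
    """
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        ni, nj = random.choice([((i - 1) % L, j), ((i + 1) % L, j),
                                (i, (j - 1) % L), (i, (j + 1) % L)])
        me, other = lattice[(i, j)], lattice[(ni, nj)]
        if cost(other) <= cost(me):
            diffs = [k for k in range(F) if me[k] != other[k]]
            if diffs:
                k = random.choice(diffs)
                me[k] = other[k]

# Toy cost: number of ones, so the optimum is the all-zeros string
L, F = 10, 16
lattice = {(i, j): [random.randint(0, 1) for _ in range(F)]
           for i in range(L) for j in range(L)}
for _ in range(200):
    ach_sweep(lattice, L, F, cost=sum)
print(min(sum(s) for s in lattice.values()))
```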
Abstract:
The parallel mutation-selection evolutionary dynamics, in which mutation and replication are independent events, is solved exactly in the case that the Malthusian fitnesses associated with the genomes are described by the random energy model (REM) and by a ferromagnetic version of the REM. The solution method uses the mapping of the evolutionary dynamics onto a quantum Ising chain in a transverse field and the Suzuki-Trotter formalism to calculate the transition probabilities between configurations at different times. We find that in the case of the REM landscape the dynamics can exhibit three distinct regimes: pure diffusion or stasis for short times, depending on the fitness of the initial configuration, and a spin-glass regime for long times. The dynamic transition between these regimes is marked by discontinuities in the mean fitness as well as in the overlap with the initial reference sequence. The relaxation to equilibrium is described by an inverse-time decay. In the ferromagnetic REM, we find, in addition to these three regimes, a ferromagnetic regime where the overlap and the mean fitness are frozen. In this case, the system relaxes to equilibrium in a finite time. The relevance of our results to information-processing aspects of evolution is discussed.
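Schematically, in the standard form of this mapping (quoted for orientation, not verbatim from the paper), the generator of the parallel dynamics for binary sequences $(\sigma_1, \dots, \sigma_L)$ acts as a quantum Hamiltonian

$$-H = \mu \sum_{i=1}^{L} \left(\sigma_i^x - 1\right) + W\!\left(\sigma_1^z, \dots, \sigma_L^z\right),$$

where $\mu$ is the point-mutation rate, the transverse-field term generates mutations, and the diagonal term $W$ encodes the Malthusian fitness landscape; the REM assigns independent random values of $W$ to the $2^L$ configurations.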
Abstract:
This paper presents a description of nuclear magnetic resonance (NMR) of quadrupolar systems using the Holstein-Primakoff (HP) formalism and its analogy with a Bose-Einstein condensate (BEC) system. Two nuclear spin systems constituted of the quadrupolar nuclei $I = 3/2$ ($^{23}$Na) and $I = 7/2$ ($^{133}$Cs) in lyotropic liquid crystals were used for experimental demonstrations. Specifically, we derived the conditions necessary for accomplishing the analogy, performed the corresponding experiments, and compared the results with the quantum-mechanical prediction for a Bose system. The NMR description in the HP representation could be applied in the future as a workbench for BEC-like systems, where the statistical properties may be obtained using the intermediate statistics first established by Gentile. The description can be applied to any quadrupolar system, including newly developed solid-state NMR GaAs nanodevices.
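For reference, the Holstein-Primakoff representation maps a spin-$I$ system onto a single bosonic mode via

$$I^z = I - a^{\dagger}a, \qquad I^{+} = \sqrt{2I}\,\sqrt{1 - \frac{a^{\dagger}a}{2I}}\;a, \qquad I^{-} = \left(I^{+}\right)^{\dagger},$$

so that for $I = 3/2$ and $I = 7/2$ the quadrupolar level populations can be phrased in terms of boson occupation numbers, which is what underlies the BEC analogy explored here.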
Abstract:
The mapping, exact or approximate, of a many-body problem onto an effective single-body problem is one of the most widely used conceptual and computational tools of physics. Here, we propose and investigate the inverse map of effective approximate single-particle equations onto the corresponding many-particle system. This approach allows us to understand which interacting system a given single-particle approximation is actually describing, and how far this is from the original physical many-body system. We illustrate the resulting reverse engineering process by means of the Kohn-Sham equations of density-functional theory. In this application, our procedure sheds light on the nonlocality of the density-potential mapping of density-functional theory, and on the self-interaction error inherent in approximate density functionals.
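For concreteness, the Kohn-Sham equations used to illustrate the reverse map read (in Hartree atomic units)

$$\left[-\tfrac{1}{2}\nabla^{2} + v_{s}[n](\mathbf{r})\right]\varphi_{i}(\mathbf{r}) = \varepsilon_{i}\,\varphi_{i}(\mathbf{r}), \qquad n(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} \left|\varphi_{i}(\mathbf{r})\right|^{2},$$

with $v_{s} = v_{\mathrm{ext}} + v_{\mathrm{H}} + v_{\mathrm{xc}}$; the reverse-engineering question is which interacting system reproduces the density generated by a given approximate $v_{\mathrm{xc}}$.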
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
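For illustration, once the test statistic is reduced to a max-flow problem it can be solved with any polynomial-time augmenting-path algorithm; here is a minimal sketch using networkx on a toy graph (the graph below is illustrative, not the paper's actual construction from the two context-tree samples):

```python
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

# Toy capacitated digraph standing in for the one built from the two samples
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

# Edmonds-Karp is the BFS-based Ford-Fulkerson variant with polynomial runtime
flow_value, flow_dict = nx.maximum_flow(G, "s", "t", flow_func=edmonds_karp)
print(flow_value)  # value of the maximum s-t flow (here 5.0)
```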