1000 results for Statistical physics.
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamical transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure of the improvement in performance obtained using non-binary alphabets.
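A minimal sketch (our own illustration, not the paper's code) of the ensemble described above: a sparse random parity-check matrix over GF(q) with exactly C non-zero entries per column, drawn uniformly from the non-zero field elements. The function name and parameters are illustrative.

```python
import numpy as np

def sparse_parity_check(n_rows, n_cols, C, q, seed=None):
    """Random sparse matrix over GF(q) with exactly C nonzero entries per column."""
    rng = np.random.default_rng(seed)
    H = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(n_cols):
        rows = rng.choice(n_rows, size=C, replace=False)  # C distinct row positions
        H[rows, j] = rng.integers(1, q, size=C)           # nonzero GF(q) elements 1..q-1
    return H

H = sparse_parity_check(6, 12, C=3, q=4, seed=0)
assert all((H[:, j] != 0).sum() == 3 for j in range(12))  # column weight is C
```

For q = 2 this reduces to the binary Gallager ensemble; larger q gives the non-binary alphabets whose performance gain the abstract quantifies.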
Abstract:
The performance of "typical set (pairs) decoding" for ensembles of Gallager's linear code is investigated using statistical physics. In this decoding method, errors occur either when the information transmission is corrupted by atypical noise or when multiple typical sequences satisfy the parity check equation provided by the received corrupted codeword. We show that the average error rate for the second type of error over a given code ensemble can be accurately evaluated using the replica method, including the sensitivity to message length. Our approach generally improves upon the existing analysis known in the information theory community, which was recently reintroduced in IEEE Trans. Inf. Theory 45, 399 (1999), and is believed to be the most accurate to date. © 2002 The American Physical Society.
Abstract:
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time-sharing methods when algebraic codes are used. The statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time-sharing codes, while the best performance, when received transmissions are optimally decoded, is bounded by the time-sharing limit.
Abstract:
Sparse code division multiple access (CDMA), a variation on the standard CDMA method in which the spreading (signature) matrix contains only a relatively small number of nonzero elements, is presented and analysed using methods of statistical physics. The analysis provides results on the performance of maximum likelihood decoding for sparse spreading codes in the large system limit. We present results for both cases of regular and irregular spreading matrices for the binary additive white Gaussian noise channel (BIAWGN) with a comparison to the canonical (dense) random spreading code. © 2007 IOP Publishing Ltd.
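An illustrative sketch (our assumptions, not the paper's code) of the sparse CDMA signal model described above: each user's spreading signature, a column of the matrix S, contains only a few non-zero ±1 chips, and transmission is over a binary-input additive white Gaussian noise channel. All dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, L_nz = 64, 16, 3              # chips per signature, users, nonzeros per column
S = np.zeros((N, K))                # sparse spreading (signature) matrix
for k in range(K):
    chips = rng.choice(N, size=L_nz, replace=False)   # positions of nonzero chips
    S[chips, k] = rng.choice([-1.0, 1.0], size=L_nz)  # binary chip values

b = rng.choice([-1.0, 1.0], size=K)                   # users' transmitted bits
sigma = 0.1                                           # channel noise level
y = S @ b / np.sqrt(L_nz) + sigma * rng.normal(size=N)  # received chip sequence
```

Maximum-likelihood decoding, whose large-system performance the abstract analyses, would then pick the bit vector b minimizing ||y - S b / sqrt(L_nz)||.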
Abstract:
Advances in statistical physics relating to our understanding of large-scale complex systems have recently been successfully applied in the context of communication networks. Statistical mechanics methods can be used to decompose global system behavior into simple local interactions. Thus, large-scale problems can be solved or approximated in a distributed manner with iterative lightweight local messaging. This survey discusses how statistical physics methodology can provide efficient solutions to hard network problems that are intractable by classical methods. We highlight three typical examples in the realm of networking and communications. In each case we show how a fundamental idea of statistical physics helps solve the problem in an efficient manner. In particular, we discuss how to perform multicast scheduling with message passing methods, how to improve coding using the crystallization process, and how to compute optimal routing by representing routes as interacting polymers.
Abstract:
Many practical routing algorithms are heuristic, ad hoc, and centralized, rendering generic and optimal path configurations difficult to obtain. Here we study a scenario whereby selected nodes in a given network communicate with fixed routers and employ statistical physics methods to obtain optimal routing solutions subject to a generic cost. A distributive message-passing algorithm capable of optimizing the path configuration in real instances is devised, based on the analytical derivation, and is greatly simplified by expanding the cost function around the optimized flow. Good algorithmic convergence is observed in most of the parameter regimes. By applying the algorithm, we study and compare the pros and cons of balanced traffic configurations to those of consolidated traffic, which has important implications for practical communication and transportation networks. Interesting macroscopic phenomena are observed in the optimized states as an interplay between the communication density and the cost functions used. © 2013 IEEE.
Abstract:
The main purpose of this thesis is to go beyond two usual assumptions that accompany theoretical analysis in spin-glasses and inference: the i.i.d. (independently and identically distributed) hypothesis on the noise elements and the finite rank regime. The first one has been present since the early days of spin-glasses. The second one instead concerns the inference viewpoint. Disordered systems and Bayesian inference have a well-established relation, evidenced by their continuous cross-fertilization. The thesis makes use of techniques coming both from the rigorous mathematical machinery of spin-glasses, such as the interpolation scheme, and from Statistical Physics, such as the replica method. The first chapter contains an introduction to the Sherrington-Kirkpatrick and spiked Wigner models. The first is a mean field spin-glass where the couplings are i.i.d. Gaussian random variables. The second instead amounts to establishing the information theoretical limits in the reconstruction of a fixed low rank matrix, the "spike", blurred by additive Gaussian noise. In chapters 2 and 3 the i.i.d. hypothesis on the noise is broken by assuming a noise with inhomogeneous variance profile. In spin-glasses this leads to multi-species models. The inferential counterpart is called spatial coupling. All the previous models are usually studied in the Bayes-optimal setting, where everything is known about the generating process of the data. In chapter 4 instead we study the spiked Wigner model where the prior on the signal to reconstruct is ignored. In chapter 5 we analyze the statistical limits of a spiked Wigner model where the noise is no longer Gaussian, but drawn from a random matrix ensemble, which makes its elements dependent. The thesis ends with chapter 6, where the challenging problem of high-rank probabilistic matrix factorization is tackled. Here we introduce a new procedure called "decimation" and we show that it is theoretically possible to perform matrix factorization through it.
Abstract:
A new isotherm is proposed here for adsorption of condensable vapors and gases on nonporous materials having type II isotherms according to the Brunauer-Deming-Deming-Teller (BDDT) classification. The isotherm combines the recent molecular-continuum model in the multilayer region with other widely used models for sub-monolayer coverage, some of which satisfy the requirement of a Henry's law asymptote. The model is successfully tested using isotherm data for nitrogen adsorption on nonporous silica, carbon and alumina, as well as benzene and hexane adsorption on nonporous carbon. Based on the data fits, out of several different alternative choices of model for the monolayer region, the Freundlich and the Unilan models are found to be the most successful when combined with the multilayer model to predict the whole isotherm. The hybrid model is consequently applicable over a wide pressure range. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
We prove that, once an algorithm of perfect simulation for a stationary and ergodic random field F taking values in S^(Z^d), with S a bounded subset of R^n, is provided, the convergence in the mean ergodic theorem occurs exponentially fast for F. Applications from (non-equilibrium) statistical mechanics and interacting particle systems are presented.
Abstract:
We consider a kinetic Ising model which represents a generic agent-based model for various types of socio-economic systems. We study the case of a finite (and not necessarily large) number of agents N as well as the asymptotic case when the number of agents tends to infinity. The main ingredient is the individual decision thresholds, which are either fixed over time (corresponding to quenched disorder in the Ising model, leading to nonlinear deterministic dynamics which are generically non-ergodic) or which may change randomly over time (corresponding to annealed disorder, leading to ergodic dynamics). We address the question of how increasing the strength of annealed disorder relative to quenched disorder drives the system from non-ergodic behavior to ergodicity. Mathematically rigorous analysis provides an explicit and detailed picture for arbitrary realizations of the quenched initial thresholds, revealing an intriguing "jumpy" transition from non-ergodicity with many absorbing sets to ergodicity. For large N we find a critical strength of annealed randomness, above which the system becomes asymptotically ergodic. Our theoretical results suggest how to drive a system from an undesired socio-economic equilibrium (e.g. high level of corruption) to a desirable one (low level of corruption).
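A minimal toy dynamics in the spirit of the threshold model above (our own illustrative sketch, not the authors' exact update rule): each agent adopts opinion +1 when the average opinion exceeds its personal threshold, and the thresholds are either drawn once (quenched) or redrawn every step (annealed).

```python
import numpy as np

def run(N=100, T=200, annealed=False, seed=0):
    """Return the final average opinion after T synchronous updates."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1.0, 1.0], size=N)       # initial opinions
    theta = rng.uniform(-1, 1, size=N)        # decision thresholds (quenched draw)
    for _ in range(T):
        if annealed:
            theta = rng.uniform(-1, 1, size=N)  # redraw every step: annealed disorder
        s = np.where(s.mean() >= theta, 1.0, -1.0)  # threshold update on the mean field
    return s.mean()

m_quenched = run(annealed=False)   # deterministic once thresholds are fixed
m_annealed = run(annealed=True)    # keeps fluctuating: ergodic-like behavior
```

With fixed thresholds the dynamics is deterministic and freezes into an absorbing configuration, while redrawing thresholds keeps the system stochastic, mirroring the quenched/annealed dichotomy in the abstract.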
Abstract:
We calculate the density profiles and density correlation functions of the one-dimensional Bose gas in a harmonic trap, using the exact finite-temperature solutions for the uniform case, and applying a local density approximation. The results are valid for a trapping potential that is slowly varying relative to a correlation length. They allow a direct experimental test of the transition from the weak-coupling Gross-Pitaevskii regime to the strong-coupling, fermionic Tonks-Girardeau regime. We also calculate the average two-particle correlation which characterizes the bulk properties of the sample, and find that it can be well approximated by the value of the local pair correlation in the trap center.
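The local density approximation invoked above can be stated compactly (standard textbook form, not copied from the paper): the trapped gas is treated as locally uniform, with the density at position x obtained from the uniform-gas equation of state evaluated at a local chemical potential,

```latex
\mu(x) = \mu_0 - V(x), \qquad n(x) = n_{\mathrm{uniform}}\big(\mu(x),\, T\big),
```

which is valid when the trapping potential V(x) varies slowly on the scale of the correlation length, as the abstract assumes.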
Abstract:
The technique of permanently attaching piezoelectric transducers to structural surfaces has demonstrated great potential for quantitative non-destructive evaluation and smart materials design. For thin structural members such as composite laminated plates, it has been well recognized that guided Lamb wave techniques can provide a very sensitive and effective means for large area interrogation. However, since in these applications multiple wave modes are generally generated and the individual modes are usually dispersive, the received signals are very complex and difficult to interpret. An attractive way to deal with this problem has recently been introduced by applying piezoceramic transducer arrays or interdigital transducer (IDT) technologies. In this paper, the acoustic wave field in composite laminated plates excited by piezoceramic transducer arrays or IDTs is investigated. Based on dynamic piezoelectricity theory, a discrete layer theory and a multiple integral transform method, an analytical-numerical approach is developed to evaluate the input impedance characteristics of the transducer and the surface velocity response of the plate. The method enables the quantitative evaluation of the influence of the electrical characteristics of the excitation circuit, the geometric and piezoelectric properties of the transducer array, and the mechanical and geometrical features of the laminate. Numerical results are presented to validate the developed method and show the ability to select and isolate single wave modes. The results show that the interaction between individual elements of the piezoelectric array has a significant influence on the performance of the IDT, and these effects cannot be neglected even in the case of low frequency excitation. It is also demonstrated that adding backing materials to the transducer elements can be used to improve the excitability of specific wave modes. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Advances in technology have produced more and more intricate industrial systems, such as nuclear power plants, chemical centers and petroleum platforms. Such complex plants exhibit multiple interactions among smaller units and human operators, raising the potential for disastrous failures that can propagate across subsystem boundaries. This paper analyzes industrial accident data-series from the perspective of statistical physics and dynamical systems. Global data is collected from the Emergency Events Database (EM-DAT) over the time period from 1903 up to 2012. The statistical distributions of the number of fatalities caused by industrial accidents reveal Power Law (PL) behavior. We analyze the evolution of the PL parameters over time and observe a remarkable increase in the PL exponent in recent years. PL behavior allows prediction by extrapolation over a wide range of scales. In a complementary line of thought, we compare the data using appropriate indices and use different visualization techniques to correlate and to extract relationships among industrial accident events. This study contributes to a better understanding of the complexity of modern industrial accidents and their ruling principles.
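The power-law exponent of a fatality distribution such as the one above is commonly estimated by maximum likelihood over the tail (the Hill estimator, alpha_hat = 1 + n / sum(ln(x_i / x_min))). A small self-contained sketch on synthetic data, not the paper's actual fit:

```python
import math
import random

def pl_exponent(data, x_min):
    """MLE of the power-law exponent alpha for the tail x >= x_min."""
    tail = [x for x in data if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)

# Synthetic Pareto sample with true alpha = 2.5, via inverse-transform sampling:
# x = x_min * U^(-1/(alpha - 1)) for uniform U in (0, 1].
random.seed(0)
sample = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(20000)]
alpha_hat = pl_exponent(sample, x_min=1.0)   # close to the true value 2.5
```

Tracking `alpha_hat` over successive time windows is one way to observe the kind of exponent drift the abstract reports.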
Abstract:
In a seminal paper [10], Weitz gave a deterministic fully polynomial approximation scheme for counting exponentially weighted independent sets (which is the same as approximating the partition function of the hard-core model from statistical physics) in graphs of degree at most d, up to the critical activity for the uniqueness of the Gibbs measure on the infinite d-regular tree. More recently Sly [8] (see also [1]) showed that this is optimal in the sense that if there is an FPRAS for the hard-core partition function on graphs of maximum degree d for activities larger than the critical activity on the infinite d-regular tree then NP = RP. In this paper we extend Weitz's approach to derive a deterministic fully polynomial approximation scheme for the partition function of general two-state anti-ferromagnetic spin systems on graphs of maximum degree d, up to the corresponding critical point on the d-regular tree. The main ingredient of our result is a proof that for two-state anti-ferromagnetic spin systems on the d-regular tree, weak spatial mixing implies strong spatial mixing. This in turn uses a message-decay argument which extends a similar approach proposed recently for the hard-core model by Restrepo et al [7] to the case of general two-state anti-ferromagnetic spin systems.
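For intuition on the quantity being approximated: the hard-core partition function is Z(lambda) = sum over independent sets I of lambda^|I|. A brute-force evaluation on a tiny graph (our own toy, not Weitz's polynomial-time algorithm, which avoids this exponential enumeration via tree recursions):

```python
from itertools import combinations

def hardcore_Z(vertices, edges, lam):
    """Exact hard-core partition function by enumerating all vertex subsets."""
    Z = 0.0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if all(not (u in s and v in s) for u, v in edges):  # independent set?
                Z += lam ** r
    return Z

# 4-cycle: independent sets are {}, four singletons, and two opposite pairs,
# so at activity lambda = 1 we get Z = 1 + 4 + 2 = 7.
Z = hardcore_Z(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], lam=1.0)
```

Weitz's scheme approximates exactly this sum in polynomial time on bounded-degree graphs, up to the tree uniqueness threshold discussed in the abstract.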
Abstract:
Long polymers in solution frequently adopt knotted configurations. To understand the physical properties of knotted polymers, it is important to find out whether the knots formed at thermodynamic equilibrium are spread over the whole polymer chain or rather are localized as tight knots. We present here a method to analyze the knottedness of short linear portions of simulated random chains. Using this method, we observe that knot-determining domains are usually very tight, so that, for example, the preferred size of the trefoil-determining portions of knotted polymer chains corresponds to just seven freely jointed segments.