48 results for statistical physics
Abstract:
We determine the critical noise level for decoding low-density parity-check error-correcting codes based on the magnetization enumerator, rather than on the weight enumerator employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes, such as typical pairs decoding, MAP, and finite temperature decoding (MPM), becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
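A schematic form of such an enumerator-based criterion may help fix ideas; the notation below is ours, not necessarily the authors'. Writing N(m) for the number of candidate codewords at overlap (magnetization) m with the transmitted one, the critical noise level is, roughly, the largest noise at which the ferromagnetic solution m = 1 has no exponentially numerous competitors:

```latex
\mathcal{M}(m) \;\equiv\; \lim_{N\to\infty} \frac{1}{N}\,\log \bigl\langle \mathcal{N}(m) \bigr\rangle ,
\qquad
p_c \;=\; \max\bigl\{\, p \;:\; \mathcal{M}(m) \le 0 \;\;\forall\, m \neq 1 \,\bigr\}.
```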
Abstract:
We combine the replica approach from statistical physics with a variational approach to analyze learning curves analytically. We apply the method to Gaussian process regression. As a main result we derive approximate relations between empirical error measures, the generalization error, and the posterior variance.
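One exact relation of this flavour, which holds when the data are themselves drawn from the GP prior, is that the Bayes generalization error equals the averaged posterior variance. The snippet below is a minimal numerical check of that on synthetic data with an RBF kernel; it illustrates the quantities involved and is not the paper's replica-variational calculation.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # squared-exponential kernel between two 1-D point sets
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

rng = np.random.default_rng(1)
n_train, n_test, noise = 30, 200, 0.1
xs = rng.uniform(-1, 1, n_train)
xt = np.linspace(-1, 1, n_test)

# draw a "teacher" function from the prior jointly on train + test points
xall = np.concatenate([xs, xt])
f = rng.multivariate_normal(np.zeros(xall.size),
                            rbf(xall, xall) + 1e-10 * np.eye(xall.size))
y = f[:n_train] + noise * rng.normal(size=n_train)

K = rbf(xs, xs) + noise ** 2 * np.eye(n_train)
Ks = rbf(xt, xs)
mean = Ks @ np.linalg.solve(K, y)                       # posterior mean
var = np.diag(rbf(xt, xt) - Ks @ np.linalg.solve(K, Ks.T))  # posterior variance

gen_err = np.mean((mean - f[n_train:]) ** 2)            # error against the teacher
print(gen_err, np.mean(var))  # comparable for one draw; equal on average over teachers
```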
Abstract:
The replica method, developed in statistical physics, is employed in conjunction with Gallager's methodology to accurately evaluate zero-error noise thresholds for Gallager code ensembles. Our approach generally provides more optimistic evaluations than those reported in the information theory literature for sparse matrices; the difference vanishes as the parity-check matrix becomes dense.
Abstract:
We present a theoretical method for a direct evaluation of the average error exponent in Gallager error-correcting codes using methods of statistical physics. Results for the binary symmetric channel (BSC) are presented for codes of both finite and infinite connectivity.
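For orientation, the quantity in question is the exponent E(R) in the block error probability P_B ~ e^(-N E(R)). In Gallager's classical random-coding analysis for the BSC with flip probability p and rate R (in bits), it takes the standard form below; the statistical-physics treatment evaluates the ensemble-averaged exponent directly for specific LDPC constructions rather than bounding it:

```latex
P_B \sim e^{-N E(R)}, \qquad
E_r(R) = \max_{0 \le \rho \le 1} \bigl[ E_0(\rho) - \rho R \bigr], \qquad
E_0(\rho) = \rho - (1+\rho)\log_2\!\Bigl( p^{\frac{1}{1+\rho}} + (1-p)^{\frac{1}{1+\rho}} \Bigr).
```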
Abstract:
The problem of resource allocation in sparse graphs with real variables is studied using methods of statistical physics. An efficient distributed algorithm is devised on the basis of insight gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
Abstract:
The optimization of resource allocation in sparse networks with real variables is studied using methods of statistical physics. Efficient distributed algorithms are devised on the basis of insight gained from the analysis and are examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
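The two abstracts above concern the same setting: nodes with real-valued surpluses or demands, connected by a sparse network of links with convex transport costs. As a hypothetical, minimal illustration of a distributed algorithm in this setting, the sketch below assumes quadratic link costs and uses plain gradient descent on node potentials; it conveys the flavour of local computation but is not the algorithm devised in these papers.

```python
import numpy as np

# With quadratic link costs, the optimal currents are y_ij = x_i - x_j,
# where the node potentials x solve L x = Lambda (L = graph Laplacian).
# The damped update below uses only each node's neighbours.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # a small sparse network
n = 4
Lambda = np.array([2.0, -1.0, 1.5, -2.5])          # surpluses (+) / demands (-), sum 0

deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

x = np.zeros(n)
alpha = 1.0 / (2.0 * deg.max())                    # safe step: lambda_max(L) <= 2 max deg
for _ in range(2000):
    residual = Lambda.copy()                       # Lambda - L x, assembled locally
    for i, j in edges:
        residual[i] -= x[i] - x[j]
        residual[j] -= x[j] - x[i]
    x += alpha * residual

currents = {(i, j): round(x[i] - x[j], 3) for i, j in edges}
print(currents)                                    # current from i to j on each link
```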
Abstract:
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.
Abstract:
Random Boolean formulae, generated by a growth process of noisy logical gates, are analyzed using the generating functional methodology of statistical physics. We study the type of functions generated for different input distributions; their robustness at a given level of gate error; and the dependence of this robustness on formula depth, complexity, and the gates used. Bounds on their performance, derived in the information theory literature for specific gates, are straightforwardly retrieved, generalized, and identified as the corresponding typical-case phase transitions. Results for error rates, function depth, and sensitivity of the generated functions are obtained for various gate types and noise models.
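The flavour of such analyses can be seen in the simplest toy case, assumed here rather than taken from the paper: balanced formulae of 2-input NAND gates whose outputs are flipped with probability eps, fed i.i.d. inputs. Tracking the output bias P(output = 1) against depth shows how gate noise washes the computation out:

```python
# Toy bias recursion for balanced noisy-NAND formulae (an assumed model,
# not the paper's): each gate flips its output with probability eps.
def bias(depth, p_in, eps):
    p = p_in                                  # P(input = 1)
    for _ in range(depth):
        p_nand = 1.0 - p * p                  # noiseless NAND of two i.i.d. inputs
        p = (1 - eps) * p_nand + eps * (1 - p_nand)
    return p

for eps in (0.0, 0.05, 0.2):
    print(eps, [round(bias(d, 0.3, eps), 3) for d in range(8)])
```

In this toy model, for small eps the bias keeps a depth dependence (here an alternation), while beyond a critical noise level it collapses to a single fixed point independent of the inputs, a miniature analogue of the typical-case phase transitions mentioned above.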
Abstract:
The compaction behaviour of powders with soft and hard components is of particular interest to the paint processing industry. Unfortunately, at the present time, very little is known about the internal mechanisms within such systems, and suitable tests are therefore required to help in the interpretative process. The TRUBAL Distinct Element Method (DEM) program was the method of investigation used in this study. Steel (hard) and rubber (soft) particles were used in the randomly generated binary assemblies because they provided a sharp contrast in physical properties. For reasons of simplicity, isotropic compression of two-dimensional assemblies was considered initially. The assemblies were first subjected to quasi-static compaction, in order to define their behaviour under equilibrium conditions. The stress-strain behaviour of the assemblies under such conditions was found to be adequately described by a second-order polynomial expansion. The structural evolution of the simulation assemblies was also similar to that observed for real powder systems. Further simulation tests were carried out to investigate the effects of particle size on the compaction behaviour of the two-dimensional binary assemblies. Later work focused on the quasi-static compaction behaviour of three-dimensional assemblies, because they represented more realistic particle systems. The compaction behaviour of the assemblies during the simulation experiments was considered in terms of percolation theory concepts, as well as more familiar macroscopic and microstructural parameters. Percolation theory, which is based on ideas from statistical physics, has been found to be useful in the interpretation of the mechanical behaviour of simple elastic lattices. From the evidence of this study, however, percolation theory is also able to offer a useful insight into the compaction behaviour of more realistic particle assemblies.
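For readers unfamiliar with the percolation concept invoked here, the canonical demonstration (generic, and unrelated to the thesis' DEM simulations) is site percolation on a square lattice, where a spanning cluster appears sharply near occupation probability p_c ≈ 0.593:

```python
import numpy as np

def spans(occ):
    # flood fill from the top row; True if any bottom-row site is reached
    Lx, Ly = occ.shape
    seen = np.zeros_like(occ, dtype=bool)
    stack = [(0, j) for j in range(Ly) if occ[0, j]]
    for s in stack:
        seen[s] = True
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < Lx and 0 <= b < Ly and occ[a, b] and not seen[a, b]:
                seen[a, b] = True
                stack.append((a, b))
    return seen[-1].any()

rng = np.random.default_rng(0)
L = 40
for p in (0.5, 0.59, 0.65):
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(200))
    print(p, hits / 200)   # spanning probability rises sharply near p_c
```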
Abstract:
This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch and bound algorithm. The algorithm can be formulated in the large system limit as a branching process, for which critical properties can be analysed. Far from the critical point a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered, and the optimal detection properties are examined in the typical case by use of the replica method and compared to the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models; it includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
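The branching-process formulation mentioned above rests on standard Galton-Watson results, included here only for context: writing G(s) for the offspring generating function of the search tree,

```latex
m = G'(1), \qquad
q = \min\{\, s \in [0,1] : G(s) = s \,\}, \qquad
q < 1 \iff m > 1,
```

so the algorithm's critical point corresponds to a mean branching number m = 1, with q the probability that a branch of the search dies out.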
Abstract:
The problem of learning by examples in ultrametric committee machines (UCMs) is studied within the framework of statistical mechanics. Using the replica formalism we calculate the average generalization error in UCMs with L hidden layers and for a large enough number of units. In most of the regimes studied we find that the generalization error, as a function of the number of examples presented, develops a discontinuous drop at a critical value of the load parameter. We also find that when L>1 a number of teacher networks with the same number of hidden layers and different overlaps induce learning processes with the same critical points.
Abstract:
Measurement of lung ventilation is one of the most reliable techniques for diagnosing pulmonary diseases. The time-consuming and bias-prone traditional methods using hyperpolarized 3He and 1H magnetic resonance imaging have recently been improved by an automated technique based on 'multiple active contour evolution'. This method involves a simultaneous evolution of multiple initial conditions, called 'snakes', eventually leading to their 'merging', and is entirely independent of the shapes and sizes of the snakes or other parametric details. The objective of this paper is to show, through a theoretical analysis, that the functional dynamics of merging as depicted in the active contour method has a direct analogue in statistical physics, and that this explains its 'universality'. We show that the multiple active contour method has a universal scaling behaviour akin to that of classical nucleation in two spatial dimensions. We prove our point by comparing the numerically evaluated exponents with those of an equivalent thermodynamic model.
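The classical-nucleation analogue referred to is standard: in two dimensions, a droplet of radius R with line tension σ and bulk free-energy gain Δf per unit area costs

```latex
\Delta F(R) = 2\pi R\,\sigma - \pi R^{2}\,\Delta f,
\qquad
R^{*} = \frac{\sigma}{\Delta f},
\qquad
\Delta F^{*} = \frac{\pi \sigma^{2}}{\Delta f},
```

so sub-critical droplets shrink and super-critical ones grow and merge; in the analogy, the evolving contours play a role akin to droplets, and it is against this scaling that the paper compares its exponents.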
Abstract:
We determine the critical noise level for decoding low-density parity check error-correcting codes based on the magnetization enumerator (M), rather than on the weight enumerator (W) employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes such as typical pairs decoding, MAP, and finite temperature decoding (MPM) becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived using the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
Abstract:
The increase in renewable energy generators introduced into the electricity grid is putting pressure on its stability and management, as the output of renewable energy sources can be neither predicted accurately nor fully controlled. Together with the additional pressure of fluctuations in demand, this presents a problem more complex than the one the current methods of controlling electricity distribution were designed for. A global, approximate, and distributed optimisation method for power allocation that accommodates uncertainties and volatility is suggested and analysed. It is based on a probabilistic method known as message passing [1], which has deep links to statistical physics methodology. This principled method of optimisation is based on local calculations and inherently accommodates uncertainties; it is of modest computational complexity and provides good approximate solutions. We consider uncertainty and fluctuations drawn from a Gaussian distribution and incorporate them into the message-passing algorithm. We examine the effect that increasing uncertainty has on the transmission cost, and how the placement of volatile nodes within a grid, such as renewable generators or consumers, affects it.
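A hypothetical toy version of the experiment described (not the message-passing algorithm itself): with quadratic link costs the optimal transmission cost has the closed form ½ ΛᵀL⁺Λ, with L the network Laplacian and Λ the nodes' net power, so the effect of Gaussian volatility of strength σ at the nodes can be sampled directly:

```python
import numpy as np

# Toy study: how Gaussian volatility at grid nodes inflates the expected
# quadratic transmission cost (assumed model, not the paper's).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
Lp = np.linalg.pinv(L)                         # pseudo-inverse of the Laplacian

base = np.array([2.0, -1.0, 1.5, -2.5])        # planned generation/demand, sum 0
rng = np.random.default_rng(0)
for sigma in (0.0, 0.2, 0.5):
    costs = []
    for _ in range(5000):
        lam = base + rng.normal(0.0, sigma, n)
        lam -= lam.mean()                      # rebalance to zero net power
        costs.append(0.5 * lam @ Lp @ lam)     # optimal quadratic transmission cost
    print(sigma, round(float(np.mean(costs)), 3))
```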
Abstract:
We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time-reversed trajectories of the relaxation paths for a corresponding dual dynamics, which also lies within the framework of quasi-geostrophic Langevin dynamics. Cases with and without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.
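The mechanism is easiest to state in the simplest detailed-balance case, gradient Langevin dynamics (a standard textbook reduction, not the quasi-geostrophic setting of the paper): for dx = -∇V(x) dt + √(2ε) dW, the path-integral (Freidlin-Wentzell) action satisfies

```latex
S[x] = \frac{1}{4}\int_0^T \bigl|\dot{x} + \nabla V(x)\bigr|^{2}\,dt
     = \frac{1}{4}\int_0^T \bigl|\dot{x} - \nabla V(x)\bigr|^{2}\,dt
       \;+\; V\bigl(x(T)\bigr) - V\bigl(x(0)\bigr),
```

so within a basin the minimizer obeys ẋ = +∇V: the most probable fluctuation path is the time reverse of the relaxation path ẋ = -∇V, and the instanton cost from an attractor to a saddle is the barrier height ΔV, giving transition probabilities ~ e^(-ΔV/ε).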