50 results for Phase transformations (Statistical physics)
Abstract:
We employ the methods of statistical physics to study the performance of Gallager-type error-correcting codes. In this approach, the transmitted codeword comprises Boolean sums of the original message bits selected by two randomly constructed sparse matrices. We show that a broad range of these codes can potentially saturate Shannon's bound but are limited by the decoding dynamics used. Other codes show sub-optimal performance but are not restricted by the decoding dynamics. We also show how these codes may be employed as a practical public-key cryptosystem whose performance is competitive with modern cryptographic methods.
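For concreteness, the sketch below illustrates the kind of construction described above: every transmitted bit is a mod-2 (Boolean) sum of message bits selected by two random sparse matrices. It is a toy illustration only; the matrix sizes, the per-row connectivity, and the helper functions (sparse_binary, gf2_solve) are choices made here, not taken from the paper.

import numpy as np

rng = np.random.default_rng(1)

def sparse_binary(rows, cols, ones_per_row, rng):
    # Random sparse binary matrix with a fixed number of 1s per row.
    m = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        m[r, rng.choice(cols, size=ones_per_row, replace=False)] = 1
    return m

def gf2_solve(B, y):
    # Solve B x = y over GF(2) by Gauss-Jordan elimination; return None if B is singular.
    B, y = B.copy(), y.copy()
    n = len(y)
    for col in range(n):
        pivots = np.nonzero(B[col:, col])[0]
        if pivots.size == 0:
            return None
        p = col + pivots[0]
        B[[col, p]], y[[col, p]] = B[[p, col]], y[[p, col]]
        for r in range(n):
            if r != col and B[r, col]:
                B[r] ^= B[col]
                y[r] ^= y[col]
    return y

N, K = 8, 16                       # message length N, codeword length K (rate N/K = 1/2)
A = sparse_binary(K, N, 3, rng)    # sparse matrix selecting message bits
s = rng.integers(0, 2, size=N, dtype=np.uint8)   # original message

t = None
while t is None:                   # retry until the random sparse B is invertible over GF(2)
    B = sparse_binary(K, K, 3, rng)
    t = gf2_solve(B, (A @ s) % 2)  # encode by solving B t = A s (mod 2)

# Every bit of the transmitted word t is a Boolean sum of message bits picked out by A and B.
assert np.array_equal((B @ t) % 2, (A @ s) % 2)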
Abstract:
We propose a method to determine the critical noise level for decoding Gallager-type low-density parity-check error-correcting codes. The method is based on the magnetization enumerator (M), rather than on the weight enumerator (W) presented recently in the information theory literature. The interpretation of our method is appealingly simple, and the relation between different decoding schemes, such as typical pairs decoding, MAP, and finite-temperature (MPM) decoding, becomes clear. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
Abstract:
We determine the critical noise level for decoding low-density parity-check error-correcting codes based on the magnetization enumerator, rather than on the weight enumerator employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between different decoding schemes, such as typical pairs decoding, MAP, and finite-temperature (MPM) decoding, becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
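As a rough pointer to the quantities named in the two abstracts above (the notation here is schematic rather than the papers' own): the weight enumerator gives the growth rate of the expected number of codewords at a given weight, while the magnetization enumerator resolves candidate codewords by their overlap m with the transmitted word,

    \mathcal{M}(m) = \lim_{N \to \infty} \frac{1}{N} \ln \langle \mathcal{N}(m) \rangle ,

where \mathcal{N}(m) counts codewords at magnetization (overlap) m and the average runs over the code ensemble and the channel noise. The critical noise level is then estimated as the largest noise level for which \mathcal{M}(m) \le 0 for every m < 1, so that the transmitted codeword at m = 1 remains dominant.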
Abstract:
We combine the replica approach from statistical physics with a variational approach to analyze learning curves analytically. We apply the method to Gaussian process regression. As a main result we derive approximate relations between empirical error measures, the generalization error, and the posterior variance.
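To make the quantities compared in this abstract concrete, here is a minimal numpy sketch of Gaussian process regression showing how the posterior mean and posterior variance are computed; the squared-exponential kernel, noise level, and teacher function are illustrative choices, not those of the paper.

import numpy as np

def rbf(a, b, length=0.5):
    # Squared-exponential (RBF) kernel matrix between 1-D input arrays a and b.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length ** 2))

def teacher(x):
    # Illustrative target function (an assumption for this sketch).
    return np.sin(3 * x)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 20)                      # training inputs
sigma2 = 0.05                                   # observation noise variance
y = teacher(X) + rng.normal(0, np.sqrt(sigma2), X.shape)

Xs = np.linspace(-1, 1, 200)                    # test inputs
K = rbf(X, X) + sigma2 * np.eye(len(X))
Ks = rbf(Xs, X)

mean = Ks @ np.linalg.solve(K, y)               # posterior mean at the test inputs
# Posterior variance: prior variance (1 for this kernel) minus the explained part.
var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))

# Monte-Carlo estimate of the generalization error on the test grid.
gen_error = np.mean((mean - teacher(Xs)) ** 2)
print(f"generalization error ~ {gen_error:.4f}, mean posterior variance ~ {var.mean():.4f}")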
Abstract:
The replica method, developed in statistical physics, is employed in conjunction with Gallager's methodology to accurately evaluate zero-error noise thresholds for Gallager code ensembles. Our approach generally provides more optimistic evaluations than those reported in the information theory literature for sparse matrices; the difference vanishes as the parity-check matrix becomes dense.
Abstract:
We present a theoretical method for a direct evaluation of the average error exponent in Gallager error-correcting codes using methods of statistical physics. Results for the binary symmetric channel (BSC) are presented for codes of both finite and infinite connectivity.
Abstract:
The problem of resource allocation in sparse graphs with real variables is studied using methods of statistical physics. An efficient distributed algorithm is devised on the basis of insight gained from the analysis and is examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
Abstract:
The optimization of resource allocation in sparse networks with real variables is studied using methods of statistical physics. Efficient distributed algorithms are devised on the basis of insight gained from the analysis and are examined using numerical simulations, showing excellent performance and full agreement with the theoretical results.
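As a purely illustrative counterpart to the two abstracts above (this is not the authors' distributed algorithm), the toy script below balances a real-valued resource on a sparse random graph using only local, per-edge updates, which is the setting the analysis refers to; the graph construction, step size, and convergence measure are ad-hoc choices made for this sketch.

import numpy as np

rng = np.random.default_rng(0)
N = 100

# Sparse random graph: each node links to a few random partners.
edges = set()
for i in range(N):
    for j in rng.choice(N, size=3, replace=False):
        if i != int(j):
            edges.add((min(i, int(j)), max(i, int(j))))
edges = sorted(edges)

capacity = rng.normal(0.0, 1.0, N)          # real-valued surplus (+) or demand (-) per node
flow = {e: 0.0 for e in edges}              # resource carried along each edge
eta = 0.05                                  # small step size, chosen conservatively for stability

def node_surplus(capacity, flow):
    # Surplus remaining at each node after subtracting outgoing and adding incoming flows.
    s = capacity.copy()
    for (i, j), f in flow.items():
        s[i] -= f
        s[j] += f
    return s

for _ in range(3000):                        # distributed: each update uses only edge-local information
    s = node_surplus(capacity, flow)
    for i, j in edges:
        flow[(i, j)] += eta * (s[i] - s[j])  # push resource from higher to lower surplus

s = node_surplus(capacity, flow)
print("spread of final surpluses:", float(s.max() - s.min()))   # small spread = balanced allocation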
Abstract:
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.
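For reference, the error exponents evaluated here quantify the exponential decay of the block error probability with the code length N; schematically (generic notation, not necessarily the paper's),

    \bar{E}(R) = -\lim_{N \to \infty} \frac{1}{N} \ln \overline{P_B}(N, R),

where \overline{P_B}(N, R) is the block error probability at code rate R averaged over the code ensemble, and the reliability exponent refers to the exponent attained by the best codes of the same rate rather than to the ensemble average.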
Abstract:
A study is reported on the deactivation of hydroprocessing catalysts and their reactivation by the removal of coke and metal foulants. The literature on hydrotreating catalyst deactivation by coke and metals deposition, the environmental problems associated with spent catalyst disposal, and reactivation/rejuvenation processes was reviewed. Experimental studies on catalyst deactivation involved problem analysis in industrial hydroprocessing operations, through characterization of the spent catalyst, and laboratory coking studies. A comparison was made between the characteristics of spent catalysts from fixed-bed and ebullating-bed residue hydroprocessing reactor units, and the catalyst deactivation pattern in both types of reactor system was examined. In the laboratory, the nature of the initial coke deposited on the catalyst surface and its role in catalyst deactivation were studied. The influence of initial coke on catalyst surface area and porosity was significant. Both catalyst acidity and feedstock quality had a remarkable influence on the amount and the nature of the initial coke. The hydrodenitrogenation (HDN) function of the catalyst was found to be deactivated more rapidly by the initial coke than the hydrodesulphurization (HDS) function. In decoking experiments, special attention was paid to the initial conditions of coke combustion, since the early stages of contact between the coke on the spent catalyst surface and the oxygen are crucial in the decoking process. An increase in the initial combustion temperature above 440°C, or in the oxygen content of the regeneration gas above 5%, led to considerable sintering of the catalyst. At temperatures above 700°C there was a substantial loss of molybdenum from the catalyst, and phase transformations occurred in the alumina support. Spent catalyst rejuvenation studies addressed the preferred leaching route (coked vs decoked form of the spent catalyst), a comparison of different reagents (i.e., oxalic acid and tartaric acid) and promoters (i.e., hydrogen peroxide and ferric nitrate) for better selectivity in removing the major foulant (vanadium), characterization and performance evaluation of the treated catalysts, and modelling of the leaching process. The surface area and pore volume increased substantially with increasing vanadium extraction from the spent catalyst; the HDS activity showed a parallel increase. The selectivity for leaching of vanadium deposits was better, and activity recovery was higher, for catalyst rejuvenated by metal leaching prior to decoking.
Abstract:
The compaction behaviour of powders with soft and hard components is of particular interest to the paint processing industry. Unfortunately, at present very little is known about the internal mechanisms within such systems, and suitable tests are therefore required to help in the interpretative process. The TRUBAL Distinct Element Method (DEM) program was the method of investigation used in this study. Steel (hard) and rubber (soft) particles were used in the randomly generated binary assemblies because they provided a sharp contrast in physical properties. For simplicity, isotropic compression of two-dimensional assemblies was considered initially. The assemblies were first subjected to quasi-static compaction in order to define their behaviour under equilibrium conditions. The stress-strain behaviour of the assemblies under such conditions was found to be adequately described by a second-order polynomial expansion. The structural evolution of the simulation assemblies was also similar to that observed for real powder systems. Further simulation tests were carried out to investigate the effects of particle size on the compaction behaviour of the two-dimensional binary assemblies. Later work focused on the quasi-static compaction behaviour of three-dimensional assemblies, because they represented more realistic particle systems. The compaction behaviour of the assemblies during the simulation experiments was considered in terms of percolation theory concepts, as well as more familiar macroscopic and microstructural parameters. Percolation theory, which is based on ideas from statistical physics, has been found to be useful in the interpretation of the mechanical behaviour of simple elastic lattices. However, from the evidence of this study, percolation theory is also able to offer a useful insight into the compaction behaviour of more realistic particle assemblies.
Abstract:
An investigation was undertaken to study the effect of poor curing, simulating hot climatic conditions, and of remedies on the durability of steel in concrete. Three different curing environments were used: (1) saturated Ca(OH)2 solution at 20°C, (2) saturated Ca(OH)2 solution at 50°C, and (3) air at 50°C and 30% relative humidity. The third curing condition corresponds to the temperature and relative humidity typical of Middle Eastern countries. The nature of the hardened cement paste matrix cured under the above conditions was studied by means of mercury intrusion porosimetry for measuring pore size distribution. The results were expressed as total pore volume and initial pore entry diameter. The scanning electron microscope was used to examine morphological changes during hydration, which were compared with the mercury intrusion porosimetry results. X-ray diffraction and differential thermal analysis techniques were also employed to look for any phase transformations. Polymer impregnation was used to reduce the porosity of the hardened cement pastes, especially in the case of the poorly cured samples. Carbonation rates of unimpregnated and impregnated cements were determined. Chloride diffusion studies were also undertaken to establish the effect of polymer impregnation and blending of the cements. Finally, the corrosion behaviour of embedded steel bars was determined by the technique of linear polarisation. The steel was embedded in both untreated and polymer-impregnated hardened cement pastes, placed either in a solution containing NaCl or in an environmental cabinet which provided carbonation at 40°C and 50% relative humidity.
Abstract:
This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch-and-bound algorithm. In the large-system limit the algorithm can be formulated as a branching process, for which critical properties can be analysed. Far from the critical point, a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code-division multiple-access method is considered; its optimal detection properties are examined in the typical case by use of the replica method and compared with the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models, and includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
Abstract:
The problem of learning by examples in ultrametric committee machines (UCMs) is studied within the framework of statistical mechanics. Using the replica formalism we calculate the average generalization error in UCMs with L hidden layers and for a large enough number of units. In most of the regimes studied we find that the generalization error, as a function of the number of examples presented, develops a discontinuous drop at a critical value of the load parameter. We also find that when L>1 a number of teacher networks with the same number of hidden layers and different overlaps induce learning processes with the same critical points.
Abstract:
Measurement of lung ventilation is one of the most reliable techniques for diagnosing pulmonary diseases. The time-consuming and bias-prone traditional methods using hyperpolarized 3He and 1H magnetic resonance imaging have recently been improved by an automated technique based on 'multiple active contour evolution'. This method involves the simultaneous evolution of multiple initial contours, called 'snakes', eventually leading to their 'merging', and is entirely independent of the shapes and sizes of the snakes or other parametric details. The objective of this paper is to show, through a theoretical analysis, that the functional dynamics of merging as depicted in the active contour method has a direct analogue in statistical physics, and that this explains its 'universality'. We show that the multiple active contour method has a universal scaling behaviour akin to that of classical nucleation in two spatial dimensions. We prove our point by comparing the numerically evaluated exponents with those of an equivalent thermodynamic model.