924 results for Belief Propagation


Relevance:

60.00%

Publisher:

Abstract:

Low-density parity-check codes with irregular constructions have recently been shown to outperform the most advanced error-correcting codes to date. In this paper we apply methods of statistical physics to study the typical properties of simple irregular codes. We use the replica method to find a phase transition which coincides with Shannon's coding bound when appropriate parameters are chosen. The decoding by belief propagation is also studied using statistical physics arguments; the theoretical solutions obtained are in good agreement with simulation results. We compare the performance of irregular codes with that of regular codes and discuss the factors that contribute to the improvement in performance.
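
For readers who want a concrete handle on the decoding side, here is a minimal sketch of log-domain sum-product (belief propagation) decoding; the tiny parity-check matrix H and the flip probability p are illustrative toys, not the irregular ensembles analysed in the paper.

```python
import numpy as np

# Minimal log-domain sum-product (BP) decoder over a binary symmetric
# channel. H and p are illustrative, not the paper's irregular ensembles.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
p = 0.1  # BSC flip probability

def decode(y, iters=20):
    llr = (1 - 2 * y) * np.log((1 - p) / p)   # channel log-likelihood ratios
    M = H * llr                               # variable-to-check messages
    for _ in range(iters):
        # check-to-variable: tanh rule over all other neighbours of each check
        T = np.tanh(M / 2)
        row = np.prod(np.where(H == 1, T, 1.0), axis=1, keepdims=True)
        prod = np.where(H == 1, row / np.where(T == 0, 1.0, T), 0.0)
        E = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # variable-to-check update and tentative hard decision
        post = llr + E.sum(axis=0)
        M = H * post - E
        x_hat = (post < 0).astype(int)
        if not (H @ x_hat % 2).any():         # all parity checks satisfied
            break
    return x_hat

# codeword [1,1,0,0,1,0] with its first bit flipped by the channel
print(decode(np.array([0, 1, 0, 0, 1, 0])))   # -> [1 1 0 0 1 0]
```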

Relevance:

60.00%

Publisher:

Abstract:

We employ the methods presented in the previous chapter for decoding corrupted codewords, encoded using sparse parity check error correcting codes. We show the similarity between the equations derived from the TAP approach and those obtained from belief propagation, and examine their performance as practical decoding methods.
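
As a point of reference, the standard sum-product (BP) updates for a binary parity-check code can be written in log-likelihood form as below; the notation is ours, and the chapter's TAP equations take an analogous fixed-point form rather than being reproduced here.

```latex
% BP (sum-product) message updates in log-likelihood form;
% h_v is the channel field at variable node v.
\begin{align*}
  m_{v \to c} &= h_v + \sum_{c' \in \mathcal{N}(v) \setminus c} m_{c' \to v}, \\
  m_{c \to v} &= 2\,\operatorname{arctanh}\!\Bigl(\,\prod_{v' \in \mathcal{N}(c) \setminus v} \tanh\bigl(m_{v' \to c}/2\bigr)\Bigr).
\end{align*}
```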

Relevance:

60.00%

Publisher:

Abstract:

We study a variation of the graph coloring problem on random graphs of finite average connectivity. Given the number of colors, we aim to maximize the number of different colors at neighboring vertices (i.e., at one edge distance) of any vertex. Two efficient algorithms, belief propagation and Walksat, are adapted to carry out this task. We present experimental results based on two types of random graphs for different system sizes and identify the critical value of the connectivity for the algorithms to find a perfect solution. The problem and the suggested algorithms have practical relevance, since various applications, such as distributed storage, can be mapped onto this problem.
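
To make the adaptation concrete, here is a Walksat-flavoured local search sketch for the neighbourhood colour-diversity objective; the toy graph, the noise parameter, and the exact move selection are our illustrative assumptions, not the authors' implementation.

```python
import random

def diversity(graph, colours, v):
    # number of distinct colours among v's neighbours
    return len({colours[u] for u in graph[v]})

def walksat_colouring(graph, q, steps=10_000, noise=0.3):
    colours = {v: random.randrange(q) for v in graph}
    for _ in range(steps):
        # "unsatisfied" vertices: neighbourhood misses some achievable colour
        bad = [v for v in graph
               if diversity(graph, colours, v) < min(q, len(graph[v]))]
        if not bad:
            break                                  # perfect solution found
        v = random.choice(bad)
        u = random.choice(list(graph[v]))          # recolour a neighbour of v
        if random.random() < noise:
            colours[u] = random.randrange(q)       # random-walk move
        else:                                      # greedy move on local score
            colours[u] = max(range(q), key=lambda c: sum(
                diversity(graph, {**colours, u: c}, w) for w in graph[u]))
    return colours

# toy 4-cycle with two colours: each vertex should see both colours
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(walksat_colouring(g, q=2))
```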

Relevance:

60.00%

Publisher:

Abstract:

Iterative multiuser joint decoding based on exact Belief Propagation (BP) is analyzed in the large system limit by means of the replica method. It is shown that performance can be improved by appropriate power assignment to the users. The optimum power assignment can be found by linear programming in most technically relevant cases. The performance of BP iterative multiuser joint decoding is compared to suboptimum approximations based on Interference Cancellation (IC). While IC receivers show a significant loss for equal-power users, they yield performance close to BP under optimum power assignment.
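
The abstract does not spell out the linear programme, so the sketch below only shows its generic shape: minimise total power over user classes subject to linear feasibility constraints. The matrix A and vector b are placeholders standing in for the decodability conditions derived from the large-system analysis, not the paper's actual constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical coupling between two user classes and feasibility bounds.
A = np.array([[-1.0, 0.5],
              [0.4, -1.0]])
b = np.array([-1.0, -1.0])

res = linprog(c=[1.0, 1.0],               # minimise p1 + p2
              A_ub=A, b_ub=b,             # linear decodability constraints
              bounds=[(0, None), (0, None)])
print(res.x)                              # optimal per-class powers
```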

Relevance:

60.00%

Publisher:

Abstract:

We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time sharing codes, while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
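
For reference, the time-sharing baseline the abstract compares against admits a one-line statement; R_1^* and R_2^* below denote the single-user rates of the two LDPC codes (our notation, not the paper's):

```latex
% Time sharing a fraction \lambda of channel uses between two codes with
% single-user rates R_1^* and R_2^* achieves the convex combination
\[
  (R_1, R_2) = \bigl(\lambda R_1^{*},\, (1-\lambda) R_2^{*}\bigr),
  \qquad \lambda \in [0, 1],
\]
% whereas joint coding on a degraded broadcast channel can achieve
% rate pairs outside this line.
```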

Relevance:

60.00%

Publisher:

Abstract:

Code division multiple access (CDMA) in which the spreading code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive as it provides robustness and flexibility in utilizing multiaccess channels, whilst not making significant sacrifices in terms of transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite code analysis includes a replica symmetric calculation of performance in the large system limit, and an investigation of finite systems through a composite belief propagation algorithm. A variety of codes are examined, with a focus on the high multi-access interference regime. We demonstrate scenarios, both in the large size limit and for finite systems, in which the composite code has typical performance exceeding that of sparse and dense codes at equivalent signal-to-noise ratio.
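
A sketch of how one might sample a single composite spreading matrix, with a fraction of densely spread chips and the rest sparsely spread; N, K, gamma, the per-chip sparsity, and the normalisation are illustrative choices, not the paper's ensemble parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, gamma, per_chip = 64, 32, 0.5, 3    # chips, users, dense fraction, sparsity
n_dense = int(gamma * N)

S = np.zeros((N, K))
# dense rows: every user active on the chip (normalisation conventions vary)
S[:n_dense] = rng.choice([-1, 1], size=(n_dense, K)) / np.sqrt(N)
# sparse rows: only a few users active per chip
for chip in range(n_dense, N):
    users = rng.choice(K, size=per_chip, replace=False)
    S[chip, users] = rng.choice([-1, 1], size=per_chip)

# received signal for random binary messages at an illustrative noise level
b = rng.choice([-1, 1], size=K)
y = S @ b + 0.5 * rng.standard_normal(N)
```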

Relevance:

60.00%

Publisher:

Abstract:

We introduce a general matrix formulation for multiuser channels and analyse the special cases of Multiple-Input Multiple-Output channels, channels with interference, and relay arrays under LDPC coding, using methods developed for the statistical mechanics of disordered systems. We use the replica method to provide results for the typical overlaps of the original and recovered messages and discuss their implications. The results obtained are consistent with belief propagation and density evolution results but also complement them, giving additional insights into the information dynamics of these channels, with unexpected effects in some cases.
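
The density evolution results mentioned here have a textbook counterpart that is easy to state; the sketch below runs the standard recursion for a (dv, dc)-regular LDPC code over the binary erasure channel, a baseline rather than anything specific to the matrix channels of the paper.

```python
# Density-evolution recursion for a (dv, dc)-regular LDPC code on the
# binary erasure channel; eps is the channel erasure probability.
def density_evolution(eps, dv=3, dc=6, iters=2000):
    x = eps                                   # erasure prob. of v->c messages
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

for eps in (0.3, 0.42, 0.45):
    # below the (3,6) threshold (~0.43) the recursion converges to ~0
    print(eps, density_evolution(eps))
```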

Relevance:

60.00%

Publisher:

Abstract:

Automated ontology population using information extraction algorithms can produce inconsistent knowledge bases. Confidence values assigned by the extraction algorithms may serve as evidence in helping to repair inconsistencies. The Dempster-Shafer theory of evidence is a formalism that allows the appropriate interpretation of extractors’ confidence values. This chapter presents an algorithm for translating the subontologies containing conflicts into belief propagation networks and repairing conflicts based on Dempster-Shafer plausibility.
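
For concreteness, Dempster's rule of combination and the plausibility function look as follows; the two "extractor" mass functions are made-up confidence values over a two-hypothesis frame.

```python
from itertools import product

def combine(m1, m2):
    # Dempster's rule: multiply masses, intersect focal sets, renormalise.
    raw = {}
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        raw[inter] = raw.get(inter, 0.0) + x * y
    conflict = raw.pop(frozenset(), 0.0)       # mass assigned to the empty set
    return {s: v / (1 - conflict) for s, v in raw.items()}

def plausibility(m, hyp):
    # total mass of focal sets compatible with the hypothesis
    return sum(v for s, v in m.items() if s & hyp)

A, B = frozenset({'A'}), frozenset({'B'})
theta = frozenset({'A', 'B'})                  # frame of discernment
m1 = {A: 0.7, theta: 0.3}                      # extractor 1's confidence
m2 = {B: 0.5, theta: 0.5}                      # extractor 2's confidence
m = combine(m1, m2)
print(m, plausibility(m, A))
```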

Relevance:

60.00%

Publisher:

Abstract:

Cognitive Radio has been proposed as a key technology to significantly improve spectrum usage in wireless networks by enabling unlicensed users to access unused resources. We present new algorithms that are needed for the implementation of opportunistic scheduling policies that maximize the throughput utilization of resources by secondary users, under maximum interference constraints imposed by existing primary users. Our approach is based on the Belief Propagation (BP) algorithm, which is advantageous due to its simplicity and potential for distributed implementation. We examine convergence properties and evaluate the performance of the proposed BP algorithms via simulations, demonstrating that the results compare favorably with a benchmark greedy strategy. © 2013 IEEE.
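
The benchmark greedy strategy is not specified in the abstract; the sketch below shows one plausible form, admitting secondary users in order of rate while respecting an aggregate interference budget at each primary receiver. All numbers are invented.

```python
rates = [3.0, 2.5, 1.0, 0.8]                # secondary users' rates
interf = [[0.6, 0.1],                       # interference each user causes
          [0.5, 0.4],                       # at two primary receivers
          [0.1, 0.2],
          [0.2, 0.1]]
cap = [0.8, 0.5]                            # per-primary interference budgets

load, admitted = [0.0, 0.0], []
for u in sorted(range(len(rates)), key=lambda u: -rates[u]):
    # admit u only if every primary's budget still holds
    if all(load[p] + interf[u][p] <= cap[p] for p in range(len(cap))):
        admitted.append(u)
        load = [load[p] + interf[u][p] for p in range(len(cap))]
print(admitted, sum(rates[u] for u in admitted))   # schedule and throughput
```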

Relevance:

60.00%

Publisher:

Abstract:

Supply chain formation (SCF) is the process of determining the set of participants and exchange relationships within a network with the goal of setting up a supply chain that meets some predefined social objective. Many proposed solutions for the SCF problem rely on centralized computation, which presents a single point of failure and can also lead to problems with scalability. Decentralized techniques that aid supply chain emergence offer a more robust and scalable approach by allowing participants to deliberate between themselves about the structure of the optimal supply chain. Current decentralized supply chain emergence mechanisms are only able to deal with simplistic scenarios in which goods are produced and traded in single units only and without taking into account production capacities or input-output ratios other than 1:1. In this paper, we demonstrate the performance of a graphical inference technique, max-sum loopy belief propagation (LBP), in a complex multiunit supply chain emergence scenario which models additional constraints such as production capacities and input-to-output ratios. We also provide results demonstrating the performance of LBP in dynamic environments, where the properties and composition of participants are altered as the algorithm is running. Our results suggest that max-sum LBP produces consistently strong solutions on a variety of network structures in a multiunit problem scenario, and that performance tends not to be affected by on-the-fly changes to the properties or composition of participants.
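
At the core of max-sum LBP is the factor-to-variable message update, sketched generically below; the domains, the factor table, and the incoming messages are illustrative, not the paper's supply chain encoding.

```python
import numpy as np
from itertools import product

def factor_to_var(factor, domains, msgs_in, target):
    """Max-sum factor-to-variable message: for each value of the target
    variable, maximise factor utility plus the other incoming messages.
    factor: dict assignment-tuple -> utility; msgs_in: one array per variable."""
    out = np.full(len(domains[target]), -np.inf)
    for assign in product(*[range(len(d)) for d in domains]):
        val = factor[assign] + sum(m[a] for i, (m, a) in
                                   enumerate(zip(msgs_in, assign)) if i != target)
        out[assign[target]] = max(out[assign[target]], val)
    return out - out.max()              # normalise to avoid drift

# two binary variables and a factor rewarding agreement
domains = [[0, 1], [0, 1]]
factor = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): -1.0, (1, 0): -1.0}
msgs = [np.array([0.0, 0.5]), np.array([0.0, 0.0])]
print(factor_to_var(factor, domains, msgs, target=1))
```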

Relevance:

60.00%

Publisher:

Abstract:

Testing for two-sample differences is challenging when the differences are local and involve only a small portion of the data. To solve this problem, we apply a multiresolution scanning framework that performs dependent local tests on subsets of the sample space. We use a nested dyadic partition of the sample space to obtain a collection of windows and test for sample differences within each window. We place a joint prior on the states of the local hypotheses that allows both vertical and horizontal message passing within the partition tree, reflecting the spatial dependency among windows. This information-passing framework is critical for detecting local sample differences. We use both the loopy belief propagation algorithm and MCMC to obtain the posterior null probability for each window. These probabilities are then used to report sample differences based on decision procedures. Simulation studies are conducted to illustrate the performance. Multiple testing adjustment and convergence of the algorithms are also discussed.
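
The nested dyadic partition is simple to generate; the sketch below enumerates the windows of [0, 1) up to an illustrative resolution cap, each window then being the site of a local test.

```python
def dyadic_windows(depth):
    # level 0 is the whole space; level l splits it into 2**l equal windows
    windows = []
    for level in range(depth + 1):
        n = 2 ** level
        windows += [(k / n, (k + 1) / n) for k in range(n)]
    return windows

for w in dyadic_windows(2):
    print(w)     # (0,1), (0,0.5), (0.5,1), (0,0.25), ...
```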

Relevance:

30.00%

Publisher:

Abstract:

In this second part of our comparative study inspecting the (dis)similarities between “Stokes” and “Jones,” we present simulation results yielded by two independent Monte Carlo programs: (i) one developed in Bern with the Jones formalism and (ii) the other implemented in Ulm with the Stokes notation. The simulated polarimetric experiments involve suspensions of polystyrene spheres of varying size. Reflection and refraction at the sample/air interfaces are also considered. Both programs yield identical results when propagating pure polarization states, yet with unpolarized illumination second-order statistical differences appear, thereby highlighting the pre-averaged nature of the Stokes parameters. This study serves as a validation of both programs and dispels the misleading belief that “Jones cannot treat depolarizing effects.”
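
The "pre-averaged" point can be illustrated directly: a Jones program recovers depolarisation by averaging the Stokes vectors derived from an ensemble of pure states. The random-phase ensemble below is a toy, and Stokes sign conventions vary.

```python
import numpy as np

def stokes(E):
    # Stokes vector of a pure Jones state (one common sign convention)
    Ex, Ey = E
    return np.array([abs(Ex)**2 + abs(Ey)**2,
                     abs(Ex)**2 - abs(Ey)**2,
                     2 * (Ex * np.conj(Ey)).real,
                    -2 * (Ex * np.conj(Ey)).imag])

rng = np.random.default_rng(1)
# ensemble of unit-intensity Jones states with random relative phase
phases = rng.uniform(0, 2 * np.pi, 10_000)
S = np.mean([stokes((1 / np.sqrt(2),
                     np.exp(1j * ph) / np.sqrt(2))) for ph in phases], axis=0)
dop = np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]
print(S, dop)    # degree of polarisation well below 1 despite pure states
```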

Relevance:

20.00%

Publisher:

Abstract:

Digital computer systems and networks are now the main engineering tools, used in the planning, design, operation, and control of buildings, transportation, machinery, businesses, and life-maintaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, reducing the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on prior knowledge of the virus code. Despite their good adaptation capability, these programs act merely as vaccines against known diseases and cannot prevent new infections based on the network state. Here, we model the dynamics of computer virus propagation and relate it to other notable events occurring in the network, permitting preventive policies to be established for network management. Data on three different viruses are collected from the Internet, and two identification techniques, autoregressive and Fourier analyses, are applied, showing that it is possible to forecast the propagation dynamics of a new virus using data collected from viruses that formerly infected the network. Copyright (c) 2008 J. R. C. Piqueira and F. B. Cesar.
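
A sketch of the autoregressive identification step: fit AR(p) coefficients to an incidence series by least squares and forecast one step ahead. The synthetic series and the order p = 3 stand in for the collected virus data.

```python
import numpy as np

def fit_ar(series, p=3):
    # columns are lags p..1; solve the least-squares regression for y_t
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(2)
t = np.arange(200)
# synthetic damped-oscillation incidence series plus noise
series = 50 + 30 * np.exp(-t / 60) * np.cos(t / 8) + rng.normal(0, 1, 200)
a = fit_ar(series)
forecast = series[-3:] @ a            # one-step-ahead prediction (p = 3)
print(forecast, series[-1])
```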

Relevance:

20.00%

Publisher:

Abstract:

Compartmental epidemiological models have been developed since the 1920s and successfully applied to study the propagation of infectious diseases. Moreover, owing to their structure, an interesting version of these models was developed in the 1960s to clarify aspects of rumor propagation, on the grounds that spreading an infectious disease and disseminating information are analogous phenomena. Here, in analogy with the SIR (Susceptible-Infected-Removed) epidemiological model, the ISS (Ignorant-Spreader-Stifler) rumor spreading model is studied. Using concepts from dynamical systems theory, the stability of the equilibrium points is established as a function of the propagation parameters and initial conditions. Numerical experiments are conducted to validate the model.
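
One common form of the ISS dynamics, integrated with a simple Euler step; the interaction terms and parameter values below are illustrative rather than the exact system studied in the paper.

```python
def iss(i0=0.99, s0=0.01, lam=0.8, alp=0.3, dt=0.01, T=60.0):
    # ignorants become spreaders on contact with spreaders (rate lam);
    # spreaders become stiflers on contact with spreaders or stiflers (alp)
    i, s, r = i0, s0, 1.0 - i0 - s0
    for _ in range(int(T / dt)):
        di = -lam * i * s
        dr = alp * s * (s + r)
        i, s, r = i + dt * di, s + dt * (-di - dr), r + dt * dr
    return i, s, r

print(iss())   # spreaders die out; a finite fraction stays ignorant
```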

Relevance:

20.00%

Publisher:

Abstract:

In the last decade, the Sznajd model has been successfully employed in modeling some properties and scale features of both proportional and majority elections. We propose a version of the Sznajd model with a generalized bounded confidence rule: a rule that limits the convincing capability of agents and that is essential to allow the coexistence of opinions in the stationary state. With an appropriate choice of parameters it reduces to previous models. We solved this model both in a mean-field approach (for an arbitrary number of opinions) and numerically in a Barabási-Albert network (for three and four opinions), studying the transient and the possible stationary states. We built the phase portrait for the special cases of three and four opinions, defining the attractors and their basins of attraction. Through this analysis, we were able to understand and explain discrepancies between the mean-field and simulation results obtained in previous works for the usual Sznajd model with bounded confidence and three opinions. Both the dynamical-systems approach and our generalized bounded confidence rule are quite general, and we believe they can be useful for understanding other similar models.
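
A generic Sznajd-type update on a ring with a simple bounded-confidence window is sketched below; the paper's generalised rule, parameters, and Barabási-Albert topology are not reproduced.

```python
import random

def sznajd(n=200, q=3, eps=1, steps=200_000, seed=0):
    # an agreeing pair of neighbours convinces its outer neighbours,
    # but only those whose opinion lies within the confidence window eps
    random.seed(seed)
    op = [random.randrange(q) for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)
        j = (i + 1) % n
        if op[i] == op[j]:                        # agreeing pair
            for k in ((i - 1) % n, (j + 1) % n):  # outer neighbours
                if abs(op[k] - op[i]) <= eps:     # bounded confidence
                    op[k] = op[i]
    return op

final = sznajd()
print({c: final.count(c) for c in set(final)})    # surviving opinions
```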