858 results for Practical Error Estimator


Relevance:

30.00%

Publisher:

Abstract:

We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords as Boolean sums of the original message bits, employing two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with a non-binary alphabet, and on how finite system size affects the error probability.
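As an illustration of the construction described above, the following is a minimal sketch of how codewords arise as Boolean (mod-2) sums through a random sparse matrix, and of the bit-to-spin mapping that underlies the Ising formulation. The toy parameters and systematic layout are illustrative assumptions, not the specific constructions studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sparse_matrix(rows, cols, ones_per_row, rng):
    """Random sparse binary matrix with a fixed number of 1s per row."""
    M = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        M[r, rng.choice(cols, size=ones_per_row, replace=False)] = 1
    return M

# Toy regular construction: each parity bit is a Boolean (mod-2) sum of a few
# message bits, chosen by a randomly constructed sparse matrix.
K, N = 4, 8                              # message length, codeword length
A = random_sparse_matrix(N - K, K, ones_per_row=3, rng=rng)

message = rng.integers(0, 2, size=K)
parity = (A @ message) % 2
codeword = np.concatenate([message, parity])

# Ising mapping used in the statistical-physics treatment: bit b -> spin
# s = (-1)**b, so a mod-2 sum of bits becomes a product of spins.
spins = (-1) ** codeword
print(codeword, spins)
```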

Relevance:

30.00%

Publisher:

Abstract:

Using the magnetization enumerator method, we evaluate the practical and theoretical limitations of symmetric channels with real outputs. Results are presented for several regular Gallager code constructions.

Relevance:

30.00%

Publisher:

Abstract:

Regression problems are concerned with predicting the values of one or more continuous quantities, given the values of a number of input variables. For virtually every application of regression, however, it is also important to have an indication of the uncertainty in the predictions. Such uncertainties are expressed in terms of error bars, which specify the standard deviation of the distribution of predictions about the mean. Accurate estimation of error bars is of practical importance, especially when safety and reliability are at issue. The Bayesian view of regression leads naturally to two contributions to the error bars. The first arises from the intrinsic noise on the target data, while the second comes from the uncertainty in the values of the model parameters, which manifests itself in the finite width of the posterior distribution over the space of these parameters. The Hessian matrix, which involves the second derivatives of the error function with respect to the weights, is needed for implementing the Bayesian formalism in general and for estimating the error bars in particular. A study of different methods for evaluating this matrix is given, with special emphasis on the outer product approximation method. The contribution of the uncertainty in the model parameters to the error bars is a finite-data-size effect, which becomes negligible as the number of data points in the training set increases. A study of this contribution is given in relation to the distribution of data in input space. It is shown that adding data points to the training set can only reduce the local magnitude of the error bars or leave it unchanged. Using the asymptotic limit of an infinite data set, it is shown that the error bars have an approximate relation to the density of data in input space.
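The two contributions to the error bars can be made concrete for a model that is linear in its parameters, where the outer product (Gauss-Newton) form of the Hessian is exact. The sketch below is illustrative only; the basis, precisions, and data are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

beta, alpha = 25.0, 1.0                  # assumed noise and prior precisions
x = rng.uniform(-1, 1, size=30)
t = np.sin(np.pi * x) + rng.normal(0, beta ** -0.5, size=30)

def phi(x):
    """Fixed polynomial basis; the posterior over weights is then Gaussian."""
    return np.stack([x ** k for k in range(6)], axis=-1)

Phi = phi(x)

# Outer-product form of the Hessian of the regularized error:
# H = alpha*I + beta * sum_n phi_n phi_n^T (exact for linear-in-weights models).
H = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
w_map = beta * np.linalg.solve(H, Phi.T @ t)   # posterior mean (MAP) weights

# Error bars: the intrinsic-noise term 1/beta plus the parameter-uncertainty
# term phi^T H^{-1} phi, the finite-data-size effect that shrinks as the
# training set grows.
x_new = np.linspace(-1, 1, 5)
P = phi(x_new)
var = 1.0 / beta + np.einsum('ij,jk,ik->i', P, np.linalg.inv(H), P)
print(np.sqrt(var))                      # predictive standard deviations
```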

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with multi-spin interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. As the noise level increases, we find a phase transition between successful-decoding and failure phases. In most cases, this phase transition coincides with upper bounds derived in the information theory literature. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free energy. We show that the practical decoding thresholds correspond to noise levels at which suboptimal minima of the free energy emerge. Simulations of practical decoding scenarios using probability propagation agree with the theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable only in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise the parameters of the high performance codes proposed by Kanter and Saad.
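For the simplest of these families, a Sourlas-type code, the mapping onto a multi-spin Ising system takes the following standard form in the statistical-physics literature; the notation here is conventional and not necessarily the thesis's own.

```latex
% Message bits \xi_i = \pm 1; the transmitted checks are K-bit products,
%   J^0_{\langle i_1 \dots i_K \rangle} = \xi_{i_1} \cdots \xi_{i_K},
% received through the noisy channel as J_{\langle i_1 \dots i_K \rangle}.
% Decoding amounts to finding low-energy states of the Hamiltonian
\mathcal{H}(\sigma) = -\sum_{\langle i_1 \dots i_K \rangle}
   J_{\langle i_1 \dots i_K \rangle}\, \sigma_{i_1} \cdots \sigma_{i_K}
   - \frac{F}{\beta} \sum_i \sigma_i ,
% where the \sigma_i are the dynamical spins and the field F encodes prior
% knowledge of the message bias.
```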

Relevance:

30.00%

Publisher:

Abstract:

We present a phase-locking scheme that enables the demonstration of a practical dual-pump degenerate phase-sensitive amplifier for 10 Gbit/s non-return-to-zero amplitude-shift-keyed signals. The scheme makes use of cascaded Mach-Zehnder modulators to create the pump frequencies, and of injection locking to extract the signal carrier and synchronize the local lasers. An in-depth optimization study has been performed, based on measured error-rate performance, and the main degradation factors have been identified.

Relevance:

30.00%

Publisher:

Abstract:

We describe a free-space quantum cryptography system designed to allow continuous unattended key exchanges for periods of several days, over ranges of a few kilometres. The transmitter uses four lasers in a faint-pulse configuration, running at a pulse rate of 10 MHz, to generate the four required polarization states. The receiver module likewise automatically selects a measurement basis and performs polarization measurements with four avalanche photodiodes. The controlling software can implement the full key exchange, including the sifting, error correction, and privacy amplification required to generate a secure key.
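The sifting stage mentioned above can be illustrated with a toy BB84-style exchange; the code below is a noiseless sketch only (error correction and privacy amplification are omitted), and the variable names are illustrative.

```python
import secrets

n = 20  # number of faint pulses in this toy run

# Sender: a random bit and a random basis (0 = rectilinear, 1 = diagonal) per
# pulse, selecting one of the four alternative polarization states.
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]

# Receiver: an independently random measurement basis per pulse. In this
# ideal noiseless model the outcome equals the sent bit iff the bases match.
bob_bases = [secrets.randbelow(2) for _ in range(n)]
bob_bits  = [a if ab == bb else secrets.randbelow(2)
             for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both sides publicly compare bases and keep matching positions only.
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
key_alice = [alice_bits[i] for i in keep]
key_bob   = [bob_bits[i] for i in keep]
assert key_alice == key_bob              # holds only in this noiseless model
print(key_alice)
```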

Relevance:

30.00%

Publisher:

Abstract:

If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend in recent research focuses on how to accommodate various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Also, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identifies and solves major theoretical and practical hurdles to the application of secure information flow.

We adopt a minimalist approach to designing our language to ensure a simple, lenient type system. We start out with a small, simple imperative language and only add features that we deem most important for practical use. One language feature we address is arrays. Due to the various leaking channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We present a novel approach to lenient array operations, which leads to simple and lenient typing of arrays.

Type inference is necessary because a user is usually only concerned with the security types of the input/output variables of a program and would like all types for auxiliary variables to be inferred automatically. We present a type inference algorithm B and prove its soundness and completeness. Moreover, algorithm B stays close to the program and the type system, and therefore facilitates informative error reporting, generated in a cascading fashion. Algorithm B and the error reporting have been implemented and tested.

Lastly, we present a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally, based on feedback from type checking and inference. Core computations interact with code modules from the involved parties only through well-defined interfaces, and all code modules are digitally signed to ensure their authenticity and integrity.
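The core typing discipline can be sketched with a toy two-level lattice check for assignments; this is not the dissertation's algorithm B, merely an illustration of the flow rule it enforces, with invented names throughout.

```python
# Toy two-level security lattice: LOW may flow to HIGH, but not vice versa.
LOW, HIGH = 0, 1
levels = {'public_out': LOW, 'secret': HIGH, 'tmp': HIGH}

def expr_level(operands):
    """An expression is as secret as its most secret operand."""
    return max((levels[v] for v in operands), default=LOW)

def check_assignment(target, rhs_vars, pc_level=LOW):
    """Reject an assignment when the right-hand side, joined with the
    program-counter level (which tracks implicit flows through branches),
    exceeds the security level of the target variable."""
    if max(expr_level(rhs_vars), pc_level) > levels[target]:
        raise TypeError(f"illegal flow into {target!r}")

check_assignment('tmp', ['secret'])             # fine: HIGH := HIGH
try:
    check_assignment('public_out', ['secret'])  # HIGH -> LOW: rejected
except TypeError as e:
    print(e)
```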

Relevance:

30.00%

Publisher:

Abstract:

Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, where they can cause algal blooms and, in turn, numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) was compared at one-, two-, four-, six-, and eight-week sampling frequencies. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) produced load estimates 15% and 30% more accurate than those of the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
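Of the three load estimation methods compared above, the ratio estimator is the simplest to state; the sketch below shows its basic form on hypothetical daily data. The numbers are invented and unit conversion to a mass load is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily record for one year: continuously gauged flow (m^3/s)
# and a concentration (mg/L) loosely tied to flow.
flow = rng.lognormal(mean=2.0, sigma=0.8, size=365)
conc = 5.0 + 0.3 * np.sqrt(flow) + rng.normal(0, 0.5, size=365)

def ratio_estimator(sampled_days, flow, conc):
    """Classical ratio estimator: the mean sampled load over the mean sampled
    flow, scaled by the mean flow of the full (continuously gauged) record."""
    q, c = flow[sampled_days], conc[sampled_days]
    return (q * c).mean() / q.mean() * flow.mean() * 365

# Weekly grab samples versus the 'true' annual load from the full record
# (both in the same arbitrary units; conversion to kg/yr omitted).
weekly = np.arange(0, 365, 7)
print(ratio_estimator(weekly, flow, conc), (flow * conc).mean() * 365)
```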

Relevance:

30.00%

Publisher:

Abstract:

We develop energy-norm a posteriori error estimation for hp-version discontinuous Galerkin (DG) discretizations of elliptic boundary-value problems on 1-irregularly, isotropically refined affine hexahedral meshes in three dimensions. We derive an indicator for the errors, measured in the natural energy norm, that is both reliable and efficient; the ratio of the efficiency and reliability constants is independent of the local mesh sizes and depends only weakly on the polynomial degrees. In our analysis we make use of an hp-version averaging operator in three dimensions, which we explicitly construct and analyze. We use our error indicator in an hp-adaptive refinement algorithm and illustrate its practical performance in a series of numerical examples. Our numerical results indicate that exponential rates of convergence are achieved for problems with smooth solutions, as well as for problems with isotropic corner singularities.
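For orientation, a residual-based energy-norm indicator for hp-DG methods typically has the structure below (shown for the Poisson model problem); the precise weights and jump terms used in the paper may differ.

```latex
% Generic hp-version residual indicator on an element K:
\eta_K^2 = \frac{h_K^2}{p_K^2} \,\| f + \Delta u_h \|_{L^2(K)}^2
  + \frac{h_K}{p_K} \,\| [\![ \nabla u_h ]\!] \|_{L^2(\partial K \setminus \partial\Omega)}^2
  + \frac{p_K^2}{h_K} \,\| [\![ u_h ]\!] \|_{L^2(\partial K)}^2 ,
% with reliability and efficiency meaning
%   \| u - u_h \|_E^2 \;\lesssim\; \sum_K \eta_K^2 \;\lesssim\; \| u - u_h \|_E^2
% up to constants whose ratio, as stated above, is independent of the local
% mesh sizes and depends only weakly on the polynomial degrees.
```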

Relevance:

30.00%

Publisher:

Abstract:

In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Computable a posteriori error bounds are derived by generalizing the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to bifurcation problems. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
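The standard Dual-Weighted-Residual error representation that is generalized here reads, in sketch form, as follows; for the bifurcation problem the role of the target functional J is played by the critical Reynolds number.

```latex
% u is the solution of the primal problem, u_h its DG approximation, and z the
% dual (adjoint) solution associated with the target functional J:
J(u) - J(u_h) \;\approx\; \sum_{K} \rho_K(u_h)\, (z - z_h) ,
% where \rho_K(u_h) denotes the local residual on element K and z - z_h is
% approximated in practice by a higher-order reconstruction of the dual
% solution; the terms |\rho_K(u_h)(z - z_h)| serve as local error indicators.
```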

Relevance:

20.00%

Publisher:

Abstract:

β-Carotene, zeaxanthin, lutein, β-cryptoxanthin, and lycopene are liposoluble pigments widely distributed in vegetables and fruits, and after ingestion these compounds are usually detected in human blood plasma. In this study, we evaluated their potential to inhibit hemolysis of human erythrocytes mediated by the toxicity of peroxyl radicals (ROO•). 2,2'-Azobis(2-methylpropionamidine) dihydrochloride (AAPH) was used as the ROO• generator, and the hemolysis assay was carried out under experimental conditions optimized by response surface methodology and successfully adapted to a microplate format. The optimized conditions were verified at 30 × 10^6 cells/mL and 17 mM AAPH for 3 h, at which 48 ± 5% hemolysis was achieved in freshly isolated erythrocytes. Among the tested carotenoids, lycopene (IC50 = 0.24 ± 0.05 μM) was the most efficient at preventing hemolysis, followed by β-carotene (0.32 ± 0.02 μM), lutein (0.38 ± 0.02 μM), and zeaxanthin (0.43 ± 0.02 μM). These carotenoids were at least 5 times more effective than quercetin, trolox, and ascorbic acid (positive controls). β-Cryptoxanthin did not present any erythroprotective effect; rather, it induced a hemolytic effect at the highest tested concentration (3 μM). These results suggest that selected carotenoids may have the potential to act as important erythroprotective agents by preventing ROO•-induced toxicity in human erythrocytes.

Relevance:

20.00%

Publisher:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: The aim of this work was to study the practical peak voltage (PPV), a quantity determined from the voltage waveform applied to X-ray tubes, and to compare it with several definitions of kVp for different generator types: single-phase (full-wave, clinical), three-phase (six-pulse, clinical), and constant-potential (industrial). MATERIALS AND METHODS: The work compared the PPV measured invasively (using a voltage divider) with the response of two commercial non-invasive meters, as well as with the values of other quantities used to measure the peak voltage applied to the X-ray tube, and analysed the variation of the PPV with the percentage voltage ripple. RESULTS: The difference between the PPV and the most common definitions of peak voltage was found to increase with ripple. PPV values varied by up to 3% and 5%, respectively, when invasive and non-invasive measurements were compared for the three-phase and single-phase equipment. CONCLUSION: The results showed that the main influence quantity affecting the PPV is the voltage ripple. Additionally, PPV values obtained with non-invasive meters should be evaluated bearing in mind that they depend on the acquisition rate and on the waveform acquired by the instrument.
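For reference, the practical peak voltage is defined (in IEC 61676) as a weighted time average of the tube-voltage waveform; the outline below is a sketch of that definition, not a formula taken from the abstract.

```latex
% Sampled tube-voltage waveform U_i with weighting function w(U), chosen so
% that waveforms with equal PPV produce the same low-contrast image quality:
U_{\mathrm{PPV}} = \frac{\sum_i w(U_i)\, U_i}{\sum_i w(U_i)} .
% For a constant-potential (zero-ripple) generator this reduces to the applied
% voltage, while increasing ripple pulls the PPV below the conventional kVp,
% consistent with the trend reported above.
```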