1000 results for Ternary Linear Codes
Abstract:
Maximum-likelihood decoding is in many settings the optimal decoding rule, but it is very costly to implement in general. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
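As an illustration of Feldman's relaxation, here is a minimal sketch using the standard parity-polytope inequalities and scipy; the toy code, received word, and channel values are hypothetical, and the brute-force enumeration of odd subsets is only viable for low-degree checks.

import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    # LP relaxation of ML decoding: minimize llr . x over the
    # fundamental polytope defined by H (Feldman-style inequalities)
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        nbr = np.nonzero(H[j])[0]
        # one inequality per odd-sized subset S of the check's neighbourhood:
        # sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1
        for r in range(1, len(nbr) + 1, 2):
            for S in itertools.combinations(nbr, r):
                row = np.zeros(n)
                row[list(nbr)] = -1.0
                row[list(S)] = 1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)
    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x   # fractional entries would signal a pseudocodeword

# toy example: the (3,1) repetition code on a BSC with crossover 0.1
H = np.array([[1, 1, 0], [0, 1, 1]])
p = 0.1
y = np.array([1, 0, 1])                      # received word
llr = np.where(y == 0, 1.0, -1.0) * np.log((1 - p) / p)
print(lp_decode(H, llr))                     # -> [1. 1. 1.], the ML codeword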
Abstract:
The rate of solvolysis of p-nitrophenyl phosphate (PNPP) dianion in DMSO/water decreases strongly with increasing water concentration. Addition of linear alcohols (methanol, propanol, butanol, pentanol, and hexanol) at constant DMSO/water molar ratio produced an even sharper rate decrease. Alkyl phosphate formation, resulting from PNPP solvolysis in ternary DMSO/water/alcohol mixtures, increased with alcohol concentration and was essentially temperature independent. Methanol and hexanol were the poorest nucleophiles under all conditions. Activation energies and enthalpies for solvolysis in ternary mixtures were similar, while entropies varied with alcohol concentration. Taken together, these results are best interpreted in terms of a dissociative mechanism with the intervention of metaphosphate. Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
BEAMnrc, a code for simulating medical linear accelerators based on EGSnrc, has been benchmarked and used extensively in the scientific literature and is therefore often considered the gold standard for Monte Carlo simulations in radiotherapy applications. However, its long computation times make it too slow for routine clinical use, and often even for research purposes, without a large investment in computing resources. VMC++ is a much faster code thanks to the intensive use of variance reduction techniques and a much faster implementation of the condensed history technique for charged particle transport. A research version of this code is also capable of simulating the full head of linear accelerators operated in photon mode (excluding multileaf collimators and hard and dynamic wedges). In this work, a validation of the full head simulation at 6 and 18 MV is performed by simulating the addition of one head component at a time with both VMC++ and BEAMnrc and comparing the resulting phase space files. For the comparison, photon and electron fluence, photon energy fluence, mean energy, and photon spectra are considered. The largest absolute differences are found in the energy fluences. For all the simulations of the different head components, very good agreement (differences in energy fluence between VMC++ and BEAMnrc <1%) is obtained. Only one particular case at 6 MV shows a somewhat larger energy fluence difference of 1.4%. Dosimetrically, these phase space differences imply agreement between both codes at the <1% level, making the VMC++ head module suitable for full head simulations with a considerable gain in efficiency and no loss of accuracy.
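A hedged sketch of the kind of comparison described, computing per-bin relative differences in photon energy fluence between two phase-space samples; the arrays are synthetic stand-ins, since reading the actual phase-space file formats is omitted.

import numpy as np

def energy_fluence(energies, weights, bins):
    # binned energy fluence: sum of weight * energy per energy bin
    ef, _ = np.histogram(energies, bins=bins, weights=weights * energies)
    return ef

# hypothetical photon energies (MeV) standing in for the two codes' output
rng = np.random.default_rng(0)
E_beamnrc = rng.gamma(2.0, 1.0, 100_000)
E_vmc = rng.gamma(2.0, 1.0, 100_000)
w = np.ones(100_000)

bins = np.linspace(0.0, 6.0, 61)
ef_a = energy_fluence(E_beamnrc, w, bins)
ef_b = energy_fluence(E_vmc, w, bins)

# relative difference per bin, in percent, where the reference is non-zero
mask = ef_a > 0
rel = 100.0 * (ef_b[mask] - ef_a[mask]) / ef_a[mask]
print(f"max |relative difference| = {np.abs(rel).max():.2f}%")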
Linear global instability of non-orthogonal incompressible swept attachment-line boundary layer flow
Abstract:
Instability of the orthogonal swept attachment-line boundary layer has received attention by local [1, 2] and global [3-5] analysis methods over several decades, owing to the significance of this model to transition to turbulence on the surface of swept wings. However, substantially less attention has been paid to the problem of laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date [6]. The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and unravel the limits of validity of the Dorrepaal basic flow model [7] analyzed. Cross-validated results document the effect of the angle of attack (AoA) on the critical conditions identified by Hall et al. [1] and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al. [4] is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, obtaining results consistent with BiGlobal stability analysis and DNS. Beyond its computational advantages, the ODE-based model allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case as well as their connections with the disturbances of the orthogonal stability problem.
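As a sketch of the structure of such a temporal one-dimensional eigenvalue code, here is a generic Chebyshev collocation skeleton applied to a model diffusion operator with a known spectrum; it is not the Görtler-Hämmerlin stability operator itself, which would replace the matrix L below.

import numpy as np
from scipy.linalg import eig

def cheb(N):
    # Chebyshev collocation differentiation matrix on [-1, 1]
    # (Trefethen, "Spectral Methods in MATLAB", program cheb.m)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 48
D, x = cheb(N)
D2 = D @ D

# model temporal problem: lambda * q = q'' with q(+-1) = 0, whose exact
# eigenvalues are -(k*pi/2)^2; a stability code would substitute the
# linearized attachment-line operator here
L = D2[1:-1, 1:-1]                # impose Dirichlet BCs by row/column deletion
lam = np.sort(eig(L)[0].real)[::-1]
print(lam[:4])                    # ~ -2.467, -9.870, -22.21, -39.48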
Abstract:
The development of a global instability analysis code coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses [1, 2], with finite-volume-based spatial discretization, as used in standard aerodynamics codes, is presented. The key advantage of the time-stepping method over matrix-formulation approaches is that the former provides a solution to the computer-storage issues associated with the latter methodology. To date, both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on the low-order methods in common use in industry. The two codes have been validated against solutions of the BiGlobal EVP, and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability analysis has been successfully developed, based on finite-volume spatial discretization and the time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
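The time-stepping idea can be sketched as follows: wrap the flow propagator as a matrix-free linear operator and hand it to an Arnoldi eigensolver, so that only matrix-vector products, i.e. short time integrations, are ever needed and the stability matrix is never stored. This is a minimal sketch; the dense matrix A is a toy stand-in for a linearized flow operator, which in a real code would be applied by one linearized simulation per call.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 200
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))  # stand-in operator

T, nsteps = 2.0, 2000
dt = T / nsteps

def propagate(q):
    # integrate dq/dt = A q with forward Euler over time horizon T;
    # in a real code this is one linearized DNS run
    for _ in range(nsteps):
        q = q + dt * (A @ q)
    return q

M = LinearOperator((n, n), matvec=propagate, dtype=np.float64)
mu, V = eigs(M, k=6, which="LM")   # leading Ritz values of exp(A T)
lam = np.log(mu) / T               # recover the stability eigenvalues
print(np.sort(lam.real)[::-1])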
Abstract:
The authors, all from UPM, have worked as a group, and all have intervened in different academic or practical cases on the subject at different times. Building on the precedent of E. Torroja and A. Páez, who produced probabilistic safety models for concrete in Madrid, Spain, around 1957 (work now reflected in the ICOSSAR conferences), author J.M. Antón, involved since autumn 1967 in European steel construction within CECM, produced a mathematical model for reductions under superposition of independent loads and, using it, a load-coefficient pattern for codes, presented in Rome in February 1969 and practically adopted for European construction; at JCSS in Lisbon in February 1974 he suggested its unification for concrete, steel and aluminium. That model represents each load by a Gumbel Type I distribution for a 50-year period for one type of load, reduced to one year so that it can be added to the other independent loads, the sum then being set, within Gumbel theory, back to a 50-year return period; parallel models exist. A complete reliability system was produced, including non-linear effects such as buckling, phenomena considered to some degree in the current Construction Eurocodes derived from the Model Codes. The system was also considered by the author in CEB in the presence of hydraulic effects from rivers, floods, and the sea, with reference to actual practice. When drafting a road-drainage norm for MOPU Spain, the authors developed an optimization model giving a way to determine the return period, 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in south-east Spain modelled with a Gumbel Type I distribution and a paper by Ven Te Chow on the Mississippi at Keokuk using a Gumbel Type II distribution; the model can be modernized with a wider variety of extreme-value laws. In the MOPU drainage norm, the drafting commission also acted as an expert panel to set a table of return periods for elements of road drainage, in effect a complex multi-criteria decision system. These ideas were used, for example, in widely applied codes and presented at symposia and meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning, namely the selection of the criteria and utility functions involved in Bayesian, multi-criteria, or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to other aspects such as social and financial ones.
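To make the Gumbel mechanics concrete, here is a minimal sketch, with hypothetical parameter values rather than any code's calibrated ones, of the two operations the abstract relies on: reading off a design value for a given return period, and moving between 1-year and 50-year maximum distributions.

import numpy as np

# annual-maximum load modelled as Gumbel Type I with location mu, scale beta
mu, beta = 10.0, 2.0          # hypothetical parameters

def design_value(T):
    # design value x_T for return period T solves F(x_T) = 1 - 1/T,
    # with F(x) = exp(-exp(-(x - mu)/beta))
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

for T in (10, 50):
    print(f"T = {T:3d} years -> x_T = {design_value(T):.2f}")

# the 50-year maximum of i.i.d. annual Gumbel maxima is again Gumbel,
# with the same scale and a shifted location: mu_50 = mu + beta * ln(50)
mu_50 = mu + beta * np.log(50.0)
print(f"50-year-maximum location parameter: {mu_50:.2f}")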
Abstract:
Assessing wind conditions becomes harder as terrain complexity increases, hence the need to extrapolate reliably the wind parameters that determine wind farm viability, such as the annual average wind speed at all hub heights and turbulence intensities. Work on these tasks began in the early 90s with the widely used linear models WAsP and WAsP Engineering, designed primarily for simple terrain and giving remarkable results there but poorer ones on complex orographies. In parallel, non-linear Navier-Stokes solvers have developed rapidly over the last decade through CFD (Computational Fluid Dynamics) codes, allowing atmospheric boundary layer flows over steep, complex terrain to be simulated more accurately and with reduced uncertainty. This paper describes the features of these models and validates them against measurements from meteorological masts installed in highly complex terrain. The study compares the results of the mentioned models in terms of wind speed and turbulence intensity.
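Linear models of the WAsP family rest on idealized profile assumptions. As a minimal sketch with hypothetical mast data, the following computes turbulence intensity and a neutral log-law extrapolation to hub height; the log law assumes neutral stratification and homogeneous terrain, which is exactly what breaks down on complex orography and motivates the CFD codes.

import numpy as np

def log_law_speed(u_ref, z_ref, z, z0):
    # neutral log-law extrapolation of mean wind speed to height z
    return u_ref * np.log(z / z0) / np.log(z_ref / z0)

# hypothetical 10-minute statistics from a met mast at 40 m
u_mean, u_std = 7.3, 1.1          # m/s
TI = u_std / u_mean               # turbulence intensity
u_hub = log_law_speed(u_mean, z_ref=40.0, z=100.0, z0=0.05)
print(f"TI = {TI:.2f}, extrapolated 100 m speed = {u_hub:.2f} m/s")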
Abstract:
The Department of Structural Analysis of the University of Santander has long been involved in the solution of the country's practical engineering problems. Some of these have required the use of non-conventional methods of analysis in order to achieve adequate engineering answers. As an example of the increasing application of non-linear computer codes in present-day engineering practice, some cases are briefly presented. In each case, only the main features of the problem involved and of the solution used to solve it are shown.
Abstract:
In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with multi-spin interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free energy emerge. Simulations of practical decoding scenarios using probability propagation agree with the theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable only in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise the parameters of the high performance codes proposed by Kanter and Saad.
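Probability propagation is better known today as belief propagation, or the sum-product algorithm. The following is a generic sketch for an arbitrary binary parity-check matrix, not the specific Sourlas/Gallager/MacKay-Neal/Kanter-Saad constructions studied in the thesis; the toy run uses a (7,4) Hamming code with one flipped bit.

import numpy as np

def bp_decode(H, llr, max_iters=50):
    # sum-product ("probability propagation") decoding of a binary linear
    # code with parity-check matrix H; llr > 0 favours bit value 0
    m, n = H.shape
    M = H * llr                       # variable-to-check messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iters):
        E = np.zeros((m, n))          # check-to-variable messages
        for r in range(m):
            idx = np.nonzero(H[r])[0]
            t = np.tanh(M[r, idx] / 2.0)
            for j, c in enumerate(idx):
                prod = np.prod(np.delete(t, j))
                E[r, c] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        total = llr + E.sum(axis=0)
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():
            break                     # all parity checks satisfied
        M = H * (total - E)           # exclude each check's own message
    return hard

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
p = 0.1
y = np.array([0, 0, 0, 0, 1, 0, 0])   # all-zero codeword, one flip on a BSC
llr = np.where(y == 0, 1.0, -1.0) * np.log((1 - p) / p)
print(bp_decode(H, llr))               # -> [0 0 0 0 0 0 0]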
Abstract:
The emergence of digital imaging and of digital networks has made duplication of original artwork easier. Watermarking techniques, also referred to as digital signatures, sign images by introducing changes that are imperceptible to the human eye but easily recoverable by a computer program. Error-correcting codes are a good choice for correcting the errors that may arise when extracting the signature. In this paper, we present an error-correction scheme based on a combination of Reed-Solomon codes with another optimal linear code as the inner code. We have investigated the strength of the noise that this scheme withstands for a fixed capacity of the image and various lengths of the signature. Finally, we compare our results with other error-correcting techniques that are used in watermarking. We have also created a computer program for image watermarking that uses the newly presented scheme for error correction.
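The abstract does not specify the inner code, so as a hedged illustration of the inner-code half of such a concatenated scheme, here is a systematic Hamming(7,4) encoder with syndrome decoding; an outer Reed-Solomon layer would be applied across many such inner blocks.

import numpy as np

# systematic Hamming(7,4): a simple stand-in for the inner linear code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (data4 @ G) % 2

def decode(word7):
    # syndrome decoding: any single-bit error matches a unique column of H
    s = (H @ word7) % 2
    if s.any():
        err = np.argmax((H.T == s).all(axis=1))
        word7 = word7.copy()
        word7[err] ^= 1
    return word7[:4]                  # systematic part carries the data

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw[5] ^= 1                            # one bit damaged during extraction
print(decode(cw))                     # -> [1 0 1 1]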
Abstract:
The maximal cardinality of a code W on the unit sphere in n dimensions with inner product ⟨x, y⟩ ≤ s whenever x, y ∈ W, x ≠ y, is denoted by A(n, s). We use two methods for obtaining new upper bounds on A(n, s) for some values of n and s. We find new linear programming bounds using suitable polynomials of degree higher than the degrees of the previously known good polynomials due to Levenshtein [11, 12]. We also investigate the possibilities for attaining the Levenshtein bounds [11, 12]. In such cases we find the distance distributions of the corresponding feasible maximal spherical codes. Usually this leads to a contradiction, showing that such codes do not exist.
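For reference, the linear programming (Delsarte) bound that such polynomial choices feed into can be stated as follows, in the standard Gegenbauer-expansion form (normalization conventions vary across papers):

% if f(t) = \sum_{k=0}^{d} f_k G_k^{(n)}(t) satisfies f_0 > 0,
% f_k \ge 0 for all k \ge 1, and f(t) \le 0 for all t \in [-1, s], then
A(n, s) \le \frac{f(1)}{f_0}.

Levenshtein's bounds [11, 12] arise from particular optimal choices of such polynomials f; the first method of the abstract searches over higher-degree candidates.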
Abstract:
2010 Mathematics Subject Classification: 97D40, 97M10, 97M40, 97N60, 97N80, 97R80
Abstract:
We propose weakly-constrained stream and block codes with tunable pattern-dependent statistics and demonstrate that the block code capacity at large block sizes is close to the prediction obtained from a simple Markov model published earlier. We demonstrate the feasibility of the code by presenting original encoding and decoding algorithms with a complexity log-linear in the block size and with modest table memory requirements. We also show that when such codes are used for mitigation of patterning effects in optical fibre communications, a gain of about 0.5 dB is possible under realistic conditions, at the expense of a small redundancy (about 10%). © 2006 IEEE.
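A hedged sketch of the kind of capacity computation a Markov model of pattern constraints leads to, using a toy constraint rather than the paper's: for binary sequences with no adjacent ones, the capacity of the constrained channel is the base-2 logarithm of the spectral radius of the constraint graph's adjacency matrix.

import numpy as np

A = np.array([[1.0, 1.0],    # from state "last bit 0": may emit 0 or 1
              [1.0, 0.0]])   # from state "last bit 1": may emit only 0
capacity = np.log2(max(abs(np.linalg.eigvals(A))))
print(f"capacity = {capacity:.4f} bits/symbol")  # log2(golden ratio) ~ 0.6942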
Abstract:
Many U.S. students do not perform well on mathematics assessments of algebra topics such as linear functions, a building block for other functions. The poor achievement of U.S. middle school students in this topic is a problem. U.S. eighth graders had average mathematics scores on international comparison tests such as the Third International Mathematics and Science Study, later known as the Trends in International Mathematics and Science Study (TIMSS), in 1995, 1999, and 2003, while Singapore students had the highest average scores. U.S. eighth grade average mathematics scores improved on TIMSS 2007 and held steady on TIMSS 2011. Results from PISA 2009 and 2012 and from the National Assessment of Educational Progress of 2007, 2009, and 2013 showed a lack of proficiency in algebra. Curriculum studies involving TIMSS nations suggest that elementary and middle grades textbooks in high-scoring countries differed from U.S. textbooks with respect to general features. The purpose of this study was to compare the treatments of linear functions in Singapore and U.S. middle grades mathematics textbooks. Results revealed features currently in textbooks, and the findings should be valuable to constituencies who wish to improve U.S. mathematics achievement. Portions of eight Singapore and nine U.S. middle school student texts pertaining to linear functions were compared with respect to 22 features in three categories: (a) background features, (b) general features of problems, and (c) specific characterizations of problem practices, problem-solving competency types, and transfer of representation. Features were coded using a codebook developed by the researcher, and tallies and percentages were reported. Welch's t-tests and chi-square tests were used, respectively, to determine whether texts differed significantly on the features and whether codes were independent of country. U.S. and Singapore textbooks differed in page appearance and in the number of pages, problems, and images, but were similar in problem appearance. Differences in problems related to the assessment of conceptual learning: U.S. texts contained more problems requiring (a) use of definitions, (b) single computation, (c) interpreting, and (d) multiple responses. These differences may stem from cultural differences in attitudes toward education. Future studies should focus on density of pages, the spiral approach, and multiple-response problems.
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
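As a much-simplified illustration of estimating an unknown but constant phase shift from efficient observables, here is a single-qubit toy, not the paper's multi-qubit encoding procedure: for the state (|0> + e^{i*phi}|1>)/sqrt(2), the expectation values are <X> = cos(phi) and <Y> = sin(phi), so phi = atan2(<Y>, <X>).

import numpy as np

rng = np.random.default_rng(7)
phi_true, shots = 0.8, 2000

# simulate binary +/-1 measurement outcomes with P(+1) = (1 + <O>)/2
ex = 2 * rng.binomial(shots, (1 + np.cos(phi_true)) / 2) / shots - 1
ey = 2 * rng.binomial(shots, (1 + np.sin(phi_true)) / 2) / shots - 1
print(f"estimated phase: {np.arctan2(ey, ex):.3f} (true {phi_true})")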