8 results for Error correction methods

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Polar Codes are the first class of error-correcting codes proven to achieve capacity on every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "Channel Polarization". This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the "Successive Cancellation Decoder" and the "Successive Cancellation List Decoder" is compared against the results reported in the literature. To improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as inner code and a CRC as outer code. We also propose a new technique for analyzing channel polarization in the case of transmission over the AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
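For readers unfamiliar with the polarization transform, a minimal Python sketch of polar encoding follows; the block length, frozen-bit positions and the omission of the bit-reversal permutation are illustrative assumptions, not the conventions of the thesis simulators.

```python
import numpy as np

def polar_encode(u):
    """Encode a length-N = 2^n vector u (information + frozen bits) with
    the Arikan transform G_N = F^{(kron n)}, F = [[1,0],[1,1]], over GF(2).
    Bit-reversal permutation omitted; conventions vary."""
    N = len(u)
    assert N & (N - 1) == 0, "length must be a power of two"
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(int(np.log2(N))):
        G = np.kron(G, F)            # build F^{(kron n)}
    return (u @ G) % 2               # GF(2) matrix-vector product

# Example: N = 8, frozen bits (fixed to 0) at illustrative positions
u = np.zeros(8, dtype=np.uint8)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]       # information bits
print(polar_encode(u))
```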

Relevance: 100.00%

Abstract:

While operations are performed on qubits, various errors can occur, altering the information they contain. Quantum Error Correction builds algorithms that tolerate these errors and protect the information being processed. This thesis focuses on 3-qubit codes, which can correct one bit-flip error or one phase-flip error. More precisely, within these algorithms, attention is placed on the encoding procedure, which aims to better protect the information contained in a qubit from errors, and on the syndrome measurement, which identifies the qubit on which an error occurred without altering the state of the system. Moreover, exploiting the syndrome measurement procedure, the probability of a bit-flip and of a phase-flip error on a qubit was estimated using the IBM quantum experience.
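A minimal NumPy statevector sketch of the encoding and syndrome-measurement steps described above, standing in for the IBM quantum experience circuits used in the thesis; the amplitudes and the choice of flipped qubit are illustrative. The phase-flip code works analogously in the Hadamard basis.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

def encode(alpha, beta):
    """Bit-flip code encoding: a|0> + b|1>  ->  a|000> + b|111>."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000] = alpha
    psi[0b111] = beta
    return psi

def syndrome(psi):
    """Expectation values of the stabilizers Z1Z2 and Z2Z3. For a code
    state with at most one X error they are exactly +-1, so the pair
    identifies the flipped qubit without revealing (alpha, beta)."""
    Z12 = kron3(Z, Z, I2)
    Z23 = kron3(I2, Z, Z)
    return (np.real(psi.conj() @ Z12 @ psi),
            np.real(psi.conj() @ Z23 @ psi))

psi = encode(0.6, 0.8)
err = kron3(I2, X, I2)           # bit-flip on the middle qubit
print(syndrome(err @ psi))        # (-1.0, -1.0) -> error on qubit 2
```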

Relevance: 100.00%

Abstract:

The space environment has always been one of the most challenging for communications, both at the physical and at the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the usual TCP/IP suite protocols are hardly applicable. Moreover, in space scenarios reliability is fundamental: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols, such as TCP, reliability is obtained by means of an ARQ (Automatic Repeat reQuest) method, which, however, performs poorly when there are long delays on the transmission channel. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, when single bits are flipped by channel noise, the redundancy bits can be exploited to recover the original information. On binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information; FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at the upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. The Licklider Transmission Protocol (LTP) was created to add reliability to DTN (Delay-Tolerant Networking) networks and to obtain better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
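As the simplest illustration of the packet-level idea, the Python sketch below uses a single XOR parity packet to recover one erased packet out of k without retransmission; the thesis applies more general ECs to LTP, so the scheme and names here are only illustrative.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Append one XOR parity packet to k equal-length data packets."""
    return packets + [reduce(xor_bytes, packets)]

def recover(received):
    """Recover a single erased packet (marked None) from the others."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        return received               # 0 lost: done; >1 lost: ARQ needed
    present = [p for p in received if p is not None]
    received[missing[0]] = reduce(xor_bytes, present)
    return received

data = [b"pkt0", b"pkt1", b"pkt2"]
coded = add_parity(data)
coded[1] = None                       # the channel erases packet 1
print(recover(coded)[:3])             # data packets restored, no retransmit
```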

Relevance: 90.00%

Abstract:

A way to investigate turbulence is through experiments where hot-wire measurements are performed. The aim of this thesis is the analysis of the influence of a temperature gradient on hot-wire measurements in turbulence. To the author's knowledge, this investigation is the first attempt to document, understand and ultimately correct the effect of temperature gradients on turbulence statistics. A numerical approach is used, since instantaneous temperature and streamwise velocity fields are required to evaluate this effect. A channel flow simulation at Re_tau = 180 is analyzed to make a first evaluation of the amount of error introduced by the temperature gradient inside the domain. The hot-wire data field is obtained by processing the numerical flow field through an appropriate version of King's law, which connects voltage, velocity and temperature. A drift in the mean streamwise velocity profile and in the rms is observed when the temperature correction is performed by means of the centerline temperature. A correct mean velocity profile is achieved by correcting the temperature through its mean value at each wall-normal position, but a non-negligible error is still present in the rms. The key to properly correcting the velocity sensed by the hot wire is knowledge of the instantaneous temperature field. For this purpose three correction methods are proposed. Finally, a numerical simulation at Re_tau = 590 is also evaluated in order to confirm the results discussed earlier.
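A minimal sketch of the voltage-to-velocity conversion via a temperature-compensated King's law, E^2 = (T_w - T)(A + B U^n); the calibration constants, wire temperature and the specific form of the law are illustrative assumptions, not the calibration used in the thesis.

```python
import numpy as np

# Illustrative calibration constants (assumptions, not the thesis values)
A, B, n = 1.5, 0.8, 0.45      # King's law coefficients
T_w = 500.0                   # hot-wire operating temperature [K]

def kings_law_voltage(U, T):
    """E^2 = (T_w - T) * (A + B * U**n): voltage from velocity, fluid T."""
    return np.sqrt((T_w - T) * (A + B * U**n))

def velocity_from_voltage(E, T):
    """Invert King's law using the *local* fluid temperature T. Using a
    single reference value (e.g. the centerline temperature) instead of
    the local T biases the result, which is the drift noted above."""
    return ((E**2 / (T_w - T) - A) / B) ** (1.0 / n)

U_true, T_local, T_centerline = 10.0, 300.0, 295.0
E = kings_law_voltage(U_true, T_local)
print(velocity_from_voltage(E, T_local))       # exact: 10.0
print(velocity_from_voltage(E, T_centerline))  # biased by the wrong T
```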

Relevance: 90.00%

Abstract:

In the last few years there has been a great development of technologies like quantum computers and quantum communication systems, due to their huge potential and the growing number of applications. However, physical qubits suffer from many nonidealities, like measurement errors and decoherence, that generate failures in the quantum computation. This work shows how it is possible to exploit concepts from classical information theory in order to realize quantum error-correcting codes by adding some redundancy qubits. In particular, the threshold theorem states that the percentage of decoding failures can be lowered at will, provided the physical error rate is below a given accuracy threshold. The focus will be on codes belonging to the family of topological codes, like the toric, planar and XZZX surface codes. Firstly, they will be compared from a theoretical point of view, in order to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect-matching (MWPM) decoder, the most popular for such codes, will be presented. The last section will be dedicated to the analysis of the performance of these topological codes under different error channel models, showing interesting results. In particular, while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
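A minimal sketch of MWPM decoding, run on a 5-bit repetition code (the 1D analogue of a surface-code check structure) with the open-source pymatching library; the library choice and code size are assumptions, not the implementation evaluated in the thesis.

```python
import numpy as np
import pymatching  # MWPM decoder library (assumed installed)

# Parity-check matrix of a 5-bit repetition code: each row is a
# Z-type check Z_i Z_{i+1}, the 1D analogue of surface-code stabilizers.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])

matching = pymatching.Matching(H)       # matching graph built from H

error = np.array([0, 0, 1, 0, 0])       # a single bit-flip on qubit 2
syndrome = H @ error % 2                # the two adjacent checks fire
correction = matching.decode(syndrome)  # MWPM picks the lowest-weight error
print((error + correction) % 2)         # all zeros: error corrected
```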

Relevance: 30.00%

Abstract:

Stress recovery techniques have been an active research topic since 1987, when Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure were proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and solve the resulting equations by a least-squares scheme. In recent years another procedure, based on the minimization of complementary energy, called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can, in many ways, be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented and the procedure is improved, aiming at obtaining convergent second-order derivatives of the stress resultants. In order to achieve this result, two different strategies and their combination have been tested: the first is to consider larger patches, in the spirit of what is proposed in [4]; the second is to perform a second recovery on the recovered stresses. Some numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Square Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution. In fact, it has been observed that the major part of the error affecting the stress resultants is introduced when the shape functions are differentiated in order to obtain the strain components from the displacements. This procedure proves to be ultraconvergent and is extremely cost-effective, as it requires as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing needed to obtain the stress resultants with the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. The accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most of the available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles, respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
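The core of an SPR-type recovery is the patch-wise least-squares fit; the Python sketch below fits a quadratic polynomial to stress samples at the Gauss points of a 1D patch and evaluates it at the nodes. All coordinates and sample values are illustrative.

```python
import numpy as np

# Superconvergent (Gauss-point) coordinates on a 1D patch of elements,
# with raw finite element stress samples at those points (illustrative).
x_gp = np.array([0.21, 0.79, 1.21, 1.79, 2.21, 2.79])
sigma_gp = np.array([1.05, 1.62, 2.18, 2.75, 3.33, 3.88])  # noisy FE stresses

# SPR-style recovery: least-squares fit of a polynomial sigma*(x) = P(x) a
P = np.vander(x_gp, 3, increasing=True)           # basis [1, x, x^2]
a, *_ = np.linalg.lstsq(P, sigma_gp, rcond=None)  # min ||P a - sigma_gp||

x_nodes = np.array([0.0, 1.0, 2.0, 3.0])
sigma_recovered = np.vander(x_nodes, 3, increasing=True) @ a
print(sigma_recovered)    # smoothed nodal stresses from the patch fit
```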

Relevance: 30.00%

Abstract:

A new method for the evaluation of the efficiency of parabolic trough collectors, called the Rapid Test Method, is investigated at the Solar Institut Jülich. The basic concept is to carry out measurements under stagnation conditions. This allows a fast and inexpensive process, since no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is assumed to be the stagnation temperature and hence the average temperature inside the collector. This leads to a systematic error, which can be rectified through the introduction of a correction factor. A model of the collector is simulated with COMSOL Multiphysics to study the size of the correction factor depending on collector geometry and working conditions. The resulting values are compared with experimental data obtained at a test rig at the Solar Institut Jülich. These results do not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver's inner-wall temperature where it receives the least amount of solar radiation. The last two chapters analyse the specific heat capacity as a function of pressure and temperature and present some considerations about the uncertainties of the efficiency curve obtained with the Rapid Test Method.

Relevance: 30.00%

Abstract:

The increase in resolution of numerical weather prediction models has allowed more and more realistic forecasts of atmospheric parameters. Due to the growing variability of the predicted fields, the traditional verification methods are not always able to describe the model's ability, because they are based on a grid-point-by-grid-point matching between observation and prediction. Recently, new spatial verification methods have been developed with the aim of showing the benefit associated with high-resolution forecasts. Within the MesoVICT international project, the initial aim of this work is to compare the new techniques, highlighting their advantages and disadvantages. First of all, the MesoVICT basic examples, represented by synthetic precipitation fields, have been examined. Since it provides an error evaluation in terms of structure, amplitude and location of the precipitation fields, the SAL method has been studied more thoroughly than the other approaches, with its implementation in the core cases of the project. The verification procedure has concerned precipitation fields over central Europe: comparisons between the forecasts performed by the 00z COSMO-2 model and the VERA (Vienna Enhanced Resolution Analysis) have been carried out. The study of these cases has shown some weaknesses of the examined methodology; in particular, a correlation between the optimal domain size and the extension of the precipitation systems has been highlighted. In order to increase the ability of SAL, a subdivision of the original domain into three subdomains has been performed and the method has been applied again. Some limits have been found in cases in which at least one of the two domains shows no precipitation. The overall results for the subdomains have been summarized in scatter plots. With the aim of identifying systematic errors of the model, the variability of the three parameters has been studied for each subdomain.
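For reference, the sketch below computes the A (amplitude) component and the first part of the L (location) component of SAL on synthetic fields, following the standard definitions from the literature; the S (structure) component requires object identification and is omitted, and the fields are illustrative, not the MesoVICT cases.

```python
import numpy as np

def sal_amplitude(mod, obs):
    """A component of SAL: normalized difference of the domain-mean
    precipitation, in [-2, 2] (0 is a perfect amplitude forecast)."""
    d_mod, d_obs = mod.mean(), obs.mean()
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

def sal_location_l1(mod, obs):
    """First part of the L component: distance between the centers of
    mass of the two fields, normalized by the largest domain distance."""
    def center_of_mass(f):
        yy, xx = np.indices(f.shape)
        return np.array([(yy * f).sum(), (xx * f).sum()]) / f.sum()
    d = np.hypot(*mod.shape)            # domain diagonal
    return np.linalg.norm(center_of_mass(mod) - center_of_mass(obs)) / d

# Illustrative synthetic fields in the spirit of the MesoVICT basic examples
obs = np.zeros((100, 100)); obs[40:60, 40:60] = 1.0
mod = np.zeros((100, 100)); mod[45:65, 50:70] = 1.2   # shifted, too intense
print(sal_amplitude(mod, obs), sal_location_l1(mod, obs))
```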