11 results for Mathematical Techniques--Error Analysis

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Publisher:

Abstract:

The aim of this dissertation is to show, by means of a concrete case study, the power of contrastive analysis in successfully predicting the errors a language learner will make. First, language transfer is described and its importance for second language acquisition is explained. Second, a brief account of the history and development of contrastive analysis is offered. Third, the focus moves to an analysis of the errors typically made by language learners. Finally, the dissertation turns to the concrete case study of a Russian learner of English: after an analysis of the errors the student is likely to make, a recorded conversation is examined.

Relevance: 100.00%

Publisher:

Abstract:

For seven years now, the permanent GPS station at Baia Terranova has been acquiring daily data which, suitably processed, contribute to the understanding of Antarctic dynamics and to verifying whether global geophysical models fit the area of interest of the permanent GPS station. A literature review showed that a GPS series is subject to multiple possible perturbations, mainly due to errors in modelling some of the ancillary data required for processing. Moreover, some analyses revealed that such time series derived from geodetic surveys are affected by different types of noise which, if not properly accounted for, can alter the parameters of interest for the geophysical interpretation of the data. This thesis aims to understand to what extent these errors can affect the dynamic parameters characterizing the motion of the permanent station, with particular reference to the velocity of the point on which the station is installed and to any periodic signals that may be identified.
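The velocity-plus-periodic-signal estimation described above can be sketched as an ordinary least-squares fit of a daily coordinate series; the synthetic series, velocity, and noise level below are assumptions for illustration, not the station's actual data.

```python
import numpy as np

# Synthetic 7-year daily series: offset + linear trend + annual signal + noise
# (all values here are assumed, purely for illustration).
rng = np.random.default_rng(0)
t = np.arange(0, 7 * 365) / 365.25                 # time in years
true_vel = 12.0                                     # mm/yr, assumed
series = (3.0 + true_vel * t
          + 2.0 * np.sin(2 * np.pi * t)             # annual periodic signal
          + rng.normal(0, 1.5, t.size))             # measurement noise

# Design matrix: intercept, linear trend, annual sine/cosine terms
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, series, rcond=None)
intercept, velocity, s1, c1 = coef                  # velocity in mm/yr
```

In practice the noise is not white, so a realistic analysis would also model coloured-noise components when assessing the velocity uncertainty.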

Relevance: 100.00%

Publisher:

Abstract:

Mixing is a fundamental unit operation in the pharmaceutical industry to ensure consistent product quality across different batches. It is usually carried out in mechanically stirred tanks, with a large variety of designs according to the process requirements. A key aspect of pharmaceutical manufacturing is the extensive and meticulous cleaning of the vessels between runs to prevent the risk of contamination. Single-use reactors represent an increasing trend in the industry since they do not require cleaning and sterilization, reducing the need for utilities such as steam to sterilize equipment and the time between production batches. In contrast to traditional stainless steel vessels, single-use reactors consist of a plastic bag used as a vessel and disposed of after use. This thesis aims to characterize the fluid dynamics features and the mixing performance of a commercially available single-use reactor. The characterization employs a combination of experimental techniques. The analysis starts with the visual observation of the liquid behavior inside the vessel, focusing on the vortex shape evolution at different impeller speeds. The power consumption is then measured using a torque meter to quantify the power number. Particle Image Velocimetry (PIV) is employed to investigate local fluid dynamics properties such as the mean flow field and the mean and rms velocity profiles. The same experimental setup as for PIV is then used for another optical measurement technique, Planar Laser-Induced Fluorescence (PLIF). The PLIF measurements complete the characterization of the reactor with the qualitative visualization of the turbulent flow and the quantitative assessment of the system performance through the mixing time. The results confirm good mixing performance for the single-use reactor over the investigated impeller speeds and reveal that the filling volume plays a significant role in the fluid dynamics of the system.
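The power number mentioned above follows from the torque measurement through a standard relation; the torque, speed, density, and impeller diameter below are assumed values for illustration, not the measured data.

```python
import math

def power_number(torque_nm: float, speed_rps: float,
                 density: float, impeller_diam: float) -> float:
    """Np = P / (rho * N^3 * D^5), with shaft power P = 2*pi*N*torque."""
    power = 2 * math.pi * speed_rps * torque_nm          # W
    return power / (density * speed_rps ** 3 * impeller_diam ** 5)

# Assumed example values: 0.05 N·m at 5 rev/s, water, D = 0.1 m
Np = power_number(torque_nm=0.05, speed_rps=5.0,
                  density=1000.0, impeller_diam=0.1)
```

In the turbulent regime Np is roughly constant for a given impeller geometry, which is what makes it a useful characterization figure.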

Relevance: 50.00%

Publisher:

Abstract:

One of the biggest challenges contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretative error, calibration accuracy, parameter sensitivity, and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We quantify the environmental impact and, given the incomplete knowledge of the hydrogeological parameters, evaluate which are the most influential and thus require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures; they are computed by employing meta-models that characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of the uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
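A variance-based first-order Sobol index of the kind described above can be estimated with a simple pick-freeze Monte Carlo scheme; the toy linear model below is an assumption for illustration, standing in for the transport meta-model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def model(x1, x2):
    # Toy model (assumed): Var(Y) = 16/12 + 1/12, so S1 = 16/17 ~ 0.941
    return 4.0 * x1 + x2

x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
x2_new = rng.uniform(size=n)       # resample only the non-frozen input

y = model(x1, x2)
y_frozen = model(x1, x2_new)       # shares x1 ("frozen"), independent x2

# Cov(Y, Y') / Var(Y) estimates the first-order index of x1
S1 = np.cov(y, y_frozen)[0, 1] / np.var(y, ddof=1)
```

A cheap meta-model in place of the full transport simulator is what makes the large number of Monte Carlo evaluations affordable.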

Relevance: 50.00%

Publisher:

Abstract:

In the last few years there has been great progress in technologies such as quantum computers and quantum communication systems, owing to their huge potential and the growing number of applications. However, physical qubits suffer from many nonidealities, such as measurement errors and decoherence, which cause failures in quantum computation. This work shows how concepts from classical information theory can be exploited to build quantum error-correcting codes by adding redundancy qubits. In particular, the threshold theorem states that the failure rate of the decoding can be lowered at will, provided the physical error rate is below a given accuracy threshold. The focus is on codes belonging to the family of topological codes, such as the toric, planar, and XZZX surface codes. First, they are compared from a theoretical point of view to highlight their advantages and disadvantages. The algorithms behind the minimum weight perfect matching decoder, the most popular decoder for such codes, are presented. The last section is dedicated to analysing the performance of these topological codes under different error channel models, with interesting results: while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
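The idea of adding redundancy and measuring parity checks can be illustrated with its classical ancestor, the three-bit repetition code; this is only an analogue of the stabilizer measurements that surface codes perform on a 2D lattice, not a surface-code decoder.

```python
def encode(bit):
    # Redundancy: one logical bit stored in three physical bits
    return [bit, bit, bit]

def syndrome(word):
    # Two parity checks, the classical analogue of stabilizer measurements:
    # they reveal where an error sits without reading the data value itself
    return (word[0] ^ word[1], word[1] ^ word[2])

def decode(word):
    # Each single-bit-flip location produces a distinct syndrome
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(word))
    if flip is not None:
        word[flip] ^= 1
    return word[0]
```

Any single bit flip is corrected; surface codes extend this picture so that a matching decoder pairs up the syndrome defects on the lattice.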

Relevance: 40.00%

Publisher:

Abstract:

This thesis project mainly concerns the design of modern wireless systems, such as 5G or WiGig, operating at millimetre waves, through the study of an advanced technique called beamforming, which, thanks to the use of compact directive antennas, makes it possible to overcome the link-budget limits caused by the high frequencies and also to introduce spatial diversity into the communication. The main objective of the work was to evaluate, through numerical simulations, the performance of several beamforming schemes, integrating as a supporting tool a ray-tracing program capable of providing the main information about the radio channel. With it, in fact, it is possible both to carry out a general assessment of beamforming itself and to lay the groundwork for innovative solutions, called Ray-Tracing-assisted Beamforming, which are decidedly promising for future developments, as confirmed by the results.
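The steering idea behind beamforming can be sketched for a uniform linear array: phase weights align the element contributions toward a chosen direction. The array size, spacing, and steering angle below are assumptions for illustration.

```python
import numpy as np

n_el, d = 8, 0.5                       # 8 elements, half-wavelength spacing
steer = np.deg2rad(30)                 # desired beam direction (assumed)

k = np.arange(n_el)
# Conjugate phase weights: element signals add coherently toward `steer`
weights = np.exp(-1j * 2 * np.pi * d * k * np.sin(steer))

def array_gain(theta):
    sv = np.exp(1j * 2 * np.pi * d * k * np.sin(theta))  # steering vector
    return abs(weights @ sv) / n_el    # normalized array factor magnitude

g_steer = array_gain(steer)            # coherent sum: gain 1 at the beam peak
```

The extra array gain at the steered direction is precisely what recovers the link budget lost to the high path loss at millimetre waves.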

Relevance: 40.00%

Publisher:

Abstract:

Polar codes are the first class of error-correcting codes proven to achieve capacity for every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "channel polarization". This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the Successive Cancellation Decoder and the Successive Cancellation List Decoder is compared with the results reported in the literature. To improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as the inner code and a CRC as the outer code. We also propose a new technique for analysing channel polarization in the case of transmission over the AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
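For the binary erasure channel, channel polarization admits an exact recursion on the Bhattacharyya parameter Z (the AWGN case studied in the thesis has no such closed form): each step splits a channel into a worse one with Z⁻ = 2Z − Z² and a better one with Z⁺ = Z². The sketch below shows the synthetic channels polarizing toward 0 or 1.

```python
def polarize(z0, levels):
    """Apply the BEC polarization recursion `levels` times, returning
    the Bhattacharyya parameters of all 2**levels synthetic channels."""
    zs = [z0]
    for _ in range(levels):
        zs = [z for old in zs for z in (2 * old - old * old, old * old)]
    return zs

zs = polarize(0.5, 10)                 # 1024 synthetic channels from Z = 0.5
good = sum(1 for z in zs if z < 1e-3)  # nearly noiseless channels
```

The mean of Z is conserved by the recursion, so the fraction of near-perfect channels approaches the channel capacity as the block length grows; information bits are placed on those channels and the rest are frozen.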

Relevance: 40.00%

Publisher:

Abstract:

The radiofrequency spectrum is allocated so that bands are assigned to certain users, called licensed users, and cannot be used by unlicensed users even when they are idle. This inefficient use of the spectrum leads to spectral holes. To overcome the problem of spectral holes and increase spectral efficiency, Cognitive Radio (CR) is employed; all simulation work was done in MATLAB. The performance of different spectrum sensing techniques, such as matched-filter-based sensing and energy detection, is analysed as a function of various factors, such as the number of input samples, the signal-to-noise ratio (SNR), the modulation scheme (QPSK or BPSK), and different fading channels, in order to identify the best channels and systems for spectrum sensing and to improve the probability of detection. The study found that an averaging filter performs better than an IIR filter. As the number of inputs and the SNR increased, the probability of detection also improved. The Rayleigh fading channel performed better than the Rician and Nakagami fading channels.
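Energy detection as described above reduces to comparing the received energy against a threshold set for a target false-alarm rate. The BPSK-over-AWGN setup and all numbers below are assumptions for illustration (the thesis works in MATLAB; Python is used here).

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials, snr_db = 1000, 2000, 0.0        # samples, trials, SNR (assumed)
snr = 10 ** (snr_db / 10)

noise = rng.normal(0, 1, (trials, n))
signal = np.sqrt(snr) * (2 * rng.integers(0, 2, (trials, n)) - 1)  # BPSK

energy_h0 = np.sum(noise ** 2, axis=1)            # signal absent
energy_h1 = np.sum((signal + noise) ** 2, axis=1) # signal present

# Threshold for ~1% false-alarm rate, taken from the H0 energy distribution
threshold = np.quantile(energy_h0, 0.99)
p_d = np.mean(energy_h1 > threshold)              # probability of detection
```

Increasing `n` or the SNR separates the two energy distributions further, which is the trend the abstract reports for the probability of detection.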

Relevance: 40.00%

Publisher:

Abstract:

In the field of industrial automation, there is an increasing need for optimal control systems with low tracking errors and low power and energy consumption. The motors considered are mainly Permanent Magnet Synchronous Motors (PMSMs), controlled by three different controllers: a position controller, a speed controller, and a current controller. In this thesis we therefore tune the gains of the first two controllers, searching with the TwinCAT 3 software for the best set of parameters. Starting from the default parameters recommended by TwinCAT, two main methods were used and then compared: the Ziegler-Nichols method, a tabular method, and Advanced Tuning, TwinCAT's software-based auto-tuning method. To assess which set of parameters was best, several experiments were performed for each case using the Motion Control Function Blocks. Moreover, some machines, such as large robotic arms, suffer from vibration problems. To analyse them in detail, the Bode Plot tool was used, which highlights the frequencies at which resonance and anti-resonance peaks occur. This tool also makes it easier to decide which filters to apply and where, in order to improve the control.
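The Ziegler-Nichols step mentioned above reduces to a small table: from the ultimate gain Ku and oscillation period Tu observed at the stability limit, the P/PI/PID gains follow directly. The Ku and Tu values below are assumptions for illustration, not parameters from the TwinCAT experiments.

```python
def ziegler_nichols(ku: float, tu: float, kind: str = "PID"):
    """Classic Ziegler-Nichols closed-loop tuning rules.

    ku: ultimate gain at which the loop oscillates steadily
    tu: period of that oscillation (seconds)
    """
    rules = {                    # (Kp, Ti, Td)
        "P":   (0.50 * ku, float("inf"), 0.0),
        "PI":  (0.45 * ku, tu / 1.2,     0.0),
        "PID": (0.60 * ku, tu / 2.0,     tu / 8.0),
    }
    kp, ti, td = rules[kind]
    return {"Kp": kp, "Ti": ti, "Td": td}

gains = ziegler_nichols(ku=4.0, tu=0.8)   # assumed Ku and Tu
```

The table gives a starting point that is typically refined afterwards, which is where an auto-tuning tool such as TwinCAT's Advanced Tuning comes in.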