5 results for Matched-Pair Analysis

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

40.00%

Publisher:

Abstract:

The top quark is one of the fundamental particles of the Standard Model, and it is observed at the LHC in the highest-energy collisions. In particular, top-antitop (tt̄) pairs are produced via the strong interaction in gluon-gluon (gg) events or in quark-antiquark (qq̄) collisions. The different production mechanisms yield pairs with different properties: one example is the spin state of tt̄, which near the production threshold is more strongly correlated for gg events. A study aiming to measure the size of these correlations is therefore significantly facilitated by a method that discriminates the resulting pairs according to their production channel. The goal of the work presented here is thus to obtain a tool to perform this discrimination, using multivariate analysis techniques. Such methods are often applied to separate a signal from a background that hinders the analysis (here the gg and qq̄ events, respectively); this is known as a classification problem. The performance of several analysis algorithms was therefore studied, examining the distributions of numerous variables associated with the tt̄ pair production process. The best one was then selected on the basis of its efficiency in recognizing signal events and rejecting background events. For this work, the best-performing algorithm is Boosted Decision Trees, which raises a sample with initial purity 0.81 to a final purity of 0.92, at the cost of an efficiency reduced to 0.74.
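The classification setup described above can be sketched with a minimal boosted-decision-tree classifier (AdaBoost with decision stumps, written from scratch for illustration; the thesis's actual framework and variables are not specified here). The one-dimensional discriminating variable, the Gaussian shapes, and the sample sizes are all assumptions; only the initial purity of 0.81 is taken from the abstract.

```python
import math
import random

random.seed(0)
# Hypothetical 1-D discriminating variable: gg "signal" events peak at
# higher values than qq̄ "background" events; initial purity 810/1000 = 0.81.
signal = [random.gauss(1.0, 1.0) for _ in range(810)]
background = [random.gauss(-1.0, 1.0) for _ in range(190)]
X = signal + background
y = [1] * len(signal) + [-1] * len(background)

def train_stumps(X, y, n_rounds=20):
    """AdaBoost with decision stumps: a minimal boosted decision tree."""
    n = len(X)
    w = [1.0 / n] * n
    stumps = []                      # list of (threshold, polarity, alpha)
    grid = sorted(X)[::20]           # coarse threshold scan
    for _ in range(n_rounds):
        best = None
        for t in grid:
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi > t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * math.log((1.0 - err) / err)
        stumps.append((t, pol, alpha))
        # Boost the weight of misclassified events before the next round.
        w = [wi * math.exp(-alpha * yi * (pol if xi > t else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return stumps

def bdt_score(x, stumps):
    return sum(a * (p if x > t else -p) for t, p, a in stumps)

stumps = train_stumps(X, y)
selected = [(xi, yi) for xi, yi in zip(X, y) if bdt_score(xi, stumps) > 0]
n_sig_sel = sum(1 for _, yi in selected if yi == 1)
purity = n_sig_sel / len(selected)      # fraction of gg in selected sample
efficiency = n_sig_sel / len(signal)    # fraction of gg events kept
```

On this toy sample the selection raises the purity above the initial 0.81, at the cost of some signal efficiency, mirroring the trade-off quantified in the abstract.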

Relevance:

30.00%

Publisher:

Abstract:

The Large Hadron Collider, located at the CERN laboratories in Geneva, is the largest particle accelerator in the world. One of the main research fields at the LHC is the study of the Higgs boson, the most recently discovered particle, observed at the ATLAS and CMS experiments. Due to the small production cross section of the Higgs boson, only a large data sample offers the chance to study this particle's properties. In order to perform these searches, it is desirable to limit the contamination of the signal signature by the numerous and varied background processes produced in pp collisions at the LHC. Considerable attention is therefore devoted to multivariate methods which, compared to the standard cut-based analysis, can enhance the selection of a Higgs boson produced in association with a top quark pair in a dileptonic final state (ttH channel). The statistics collected up to 2012 are not sufficient to supply a significant number of ttH events; however, the methods applied in this thesis will provide a powerful tool for the larger statistics that will be collected during the next LHC data taking.
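The advantage of a multivariate discriminant over rectangular cuts, as invoked above, can be illustrated with a toy comparison. Everything here is an assumption: two hypothetical kinematic variables with Gaussian signal and background, and a simple linear discriminant standing in for a trained MVA. Both selections are tuned to the same background efficiency so their signal efficiencies can be compared directly.

```python
import random

random.seed(1)

# Hypothetical 2-D kinematic variables: signal clustered at (1, 1),
# background at (0, 0), both with unit Gaussian spread.
sig = [(random.gauss(1, 1), random.gauss(1, 1)) for _ in range(5000)]
bkg = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]

def passes_cuts(p, c):
    """Cut-based selection: independent rectangular cuts on each variable."""
    return p[0] > c and p[1] > c

def mva_score(p):
    """Linear discriminant x + y: optimal for these equal-covariance
    Gaussians, standing in for a trained multivariate classifier."""
    return p[0] + p[1]

# Tune both selections to the same background efficiency (about 10%).
bkg_scores = sorted(mva_score(p) for p in bkg)
t_mva = bkg_scores[int(0.9 * len(bkg))]

c = 0.0
while sum(passes_cuts(p, c) for p in bkg) / len(bkg) > 0.10:
    c += 0.01

eff_cut = sum(passes_cuts(p, c) for p in sig) / len(sig)
eff_mva = sum(mva_score(p) > t_mva for p in sig) / len(sig)
```

At equal background rejection, the combined discriminant keeps more signal than the independent cuts, which is the basic argument for multivariate selection in the abstract.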

Relevance:

30.00%

Publisher:

Abstract:

The radiofrequency spectrum is allocated in such a way that fixed bands are assigned to certain users, called licensed users, and cannot be used by unlicensed users even when the spectrum is idle. This inefficient use of the spectrum leads to spectral holes. To overcome the problem of spectral holes and increase spectral efficiency, Cognitive Radio (CR) was used; all simulation work was done in MATLAB. The performance of different spectrum sensing techniques, such as matched-filter-based spectrum sensing and energy detection, was analyzed. Performance depends on various factors, such as the number of input samples, the signal-to-noise ratio (SNR), the modulation scheme (QPSK or BPSK), and the fading channel; these were varied to identify the best possible channels and systems for spectrum sensing and to improve the probability of detection. The study found that an averaging filter performs better than an IIR filter, and that as the number of inputs and the SNR increase, the probability of detection also improves. The Rayleigh fading channel showed better performance than the Rician and Nakagami fading channels.
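The dependence of detection probability on SNR and sample count can be sketched with a Monte Carlo energy detector (in Python rather than the MATLAB used in the study; waveform, SNR definition, and false-alarm rate are illustrative assumptions, and fading channels are omitted).

```python
import math
import random

random.seed(2)

def detection_probability(n_samples, snr_db, trials=1000):
    """Monte Carlo probability of detection for an energy detector
    sensing a sinusoid in unit-variance Gaussian noise, at a fixed
    false-alarm rate of about 5%."""
    amp = math.sqrt(10 ** (snr_db / 10))   # amplitude from nominal SNR
    # Threshold on the average sample energy, from the Gaussian
    # approximation of the noise-only test statistic (Pfa ≈ 5%).
    threshold = 1.0 + 1.645 * math.sqrt(2.0 / n_samples)
    hits = 0
    for _ in range(trials):
        energy = sum((amp * math.cos(0.3 * k) + random.gauss(0.0, 1.0)) ** 2
                     for k in range(n_samples))
        if energy / n_samples > threshold:
            hits += 1
    return hits / trials

pd_low_snr = detection_probability(64, -5.0)    # low SNR
pd_high_snr = detection_probability(64, 0.0)    # higher SNR, same n
pd_more_n = detection_probability(256, -5.0)    # low SNR, more samples
```

Raising either the SNR or the number of input samples increases the detection probability at fixed false-alarm rate, matching the trend reported in the abstract.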

Relevance:

30.00%

Publisher:

Abstract:

Radio relics are one of the types of diffuse radio sources present in a fraction of galaxy clusters. They are characterized by elongated arc-like shapes, with sizes ranging between 0.5 and 2 Mpc, and highly polarized emission (up to ∼60%) at GHz frequencies. The linearly polarized radiation of relics, propagating through the magnetized plasma of the intracluster medium (ICM), is affected by a rotation of the linear polarization vector. This effect, known as Faraday rotation, can cause depolarization. Its study allows us to constrain the magnetic field projected along the line of sight. The aim of this thesis is to constrain the magnetic field intensity and distribution in the periphery of the cluster PSZ2 G096.88+24.18, which hosts a pair of radio relics suitable for polarization analysis. To analyse the polarization properties of the relics in PSZ2 G096.88+24.18, we used new Jansky Very Large Array (VLA) observations together with archival observations. The polarization study was performed using the Rotation Measure Synthesis technique, which allows us to recover polarization while minimizing bandwidth depolarization. Thanks to this technique, we recovered more polarization from the southern relic than previous works. We also studied the depolarization trend with resolution for the southern relic, and found that the polarization fraction decreases with beam size. Finally, we produced simulated magnetic field models, varying the auto-correlation lengths of the magnetic field, in order to reproduce the observed depolarization trend in the southern relic. Comparing our observational results and model predictions, we were able to constrain the scales over which the turbulent magnetic field varies within the cluster. We conclude that the depolarization observed in the southern relic is likely external depolarization, caused by the magnetized ICM distribution within the cluster.
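The Rotation Measure Synthesis technique mentioned above can be sketched on a noiseless toy Faraday screen. Faraday rotation turns the polarization angle by RM·λ², so the complex polarization P(λ²) = p·exp(2i(χ0 + RM·λ²)) can be Fourier-transformed in λ² to recover the rotation measure. The RM value, intrinsic angle, and frequency channels below are illustrative assumptions, not the thesis's VLA setup.

```python
import cmath

RM_TRUE = 40.0   # rad m^-2, hypothetical rotation measure of the screen
CHI0 = 0.5       # intrinsic polarization angle in rad, hypothetical

# Observed λ² channels, loosely L-band-like (1.0-1.3 GHz, 60 channels).
C = 2.998e8
lam2 = [(C / (1.0e9 + 5.0e6 * i)) ** 2 for i in range(60)]

# Fully polarized signal: P(λ²) = exp(2i(χ0 + RM·λ²)).
P = [cmath.exp(2j * (CHI0 + RM_TRUE * l2)) for l2 in lam2]

# RM Synthesis: discrete Fourier transform of P(λ²) into Faraday depth φ,
# referenced to the mean λ² of the band.
l2_mean = sum(lam2) / len(lam2)

def faraday_spectrum(phi):
    s = sum(p * cmath.exp(-2j * phi * (l2 - l2_mean))
            for p, l2 in zip(P, lam2))
    return abs(s) / len(P)

# The peak of |F(φ)| recovers the rotation measure of the screen.
recovered_rm = max(range(-200, 201), key=faraday_spectrum)
```

Summing over the whole band rather than fitting angles channel by channel is what lets the technique recover polarization while minimizing bandwidth depolarization, as described in the abstract.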

Relevance:

30.00%

Publisher:

Abstract:

This thesis contributes to the ArgMining 2021 shared task on Key Point Analysis. Key Point Analysis entails extracting a concise list of the most prominent talking points from an input corpus and calculating their prevalence. These talking points are usually referred to as key points. Key Point Analysis is divided into two subtasks: Key Point Matching, which involves assigning a matching score to each key point/argument pair, and Key Point Generation, which consists of the generation of key points. The task of Key Point Matching was approached with different models: a pretrained Sentence Transformers model and a tree-constrained Graph Neural Network were tested. The best model was the fine-tuned Sentence Transformers, which achieved a mean Average Precision score of 0.75, ranking 12th among the participating teams. This model was then reused for the Key Point Generation subtask: key point candidates were selected with an extractive method and scored with the matching model developed for the previous subtask.
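The matching step above can be sketched as scoring each key point/argument pair by the cosine similarity of their sentence embeddings. The three-dimensional vectors and key point names below are hypothetical placeholders; in practice the embeddings would come from the fine-tuned Sentence Transformers encoder.

```python
import math

def cosine(u, v):
    """Cosine similarity, used here as the key point/argument matching score."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Hypothetical embeddings; a real pipeline would obtain them from the
# trained sentence encoder rather than hard-coded vectors.
argument_emb = [0.9, 0.1, 0.2]
key_point_embs = {
    "kp_taxes": [0.8, 0.2, 0.1],
    "kp_health": [0.1, 0.9, 0.3],
}

scores = {kp: cosine(argument_emb, emb)
          for kp, emb in key_point_embs.items()}
best_match = max(scores, key=scores.get)
```

Ranking arguments by this score per key point is also what supports the prevalence estimate in the generation subtask, since each candidate key point can be evaluated by how many arguments match it strongly.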