9 results for "Performance of High Energy Physics detectors"

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

The scientific success of the LHC experiments at CERN strongly depends on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through a high-performance network. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements in both the detector hardware and the software and computing systems, in order to deal with the large increase in event rate expected in the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years, Artificial Intelligence has taken on an important role in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will be crucial in the HL-LHC phase. This thesis contributes to a CMS R&D project on a Machine Learning "as a Service" solution for HEP (MLaaS4HEP): a data service able to run an entire ML pipeline (reading data, preprocessing, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been extended with new features in the data preprocessing phase, giving the user more flexibility. Since MLaaS4HEP is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case to test the framework and the contributions made in this work.
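
The pipeline described above is only summarised in the abstract; the short Python sketch below illustrates the general read-preprocess-train-predict flow that such a service automates, using uproot and scikit-learn rather than the actual MLaaS4HEP code. File, tree, and branch names are hypothetical.

# Illustrative sketch of a read -> preprocess -> train -> predict pipeline,
# NOT the MLaaS4HEP implementation. File, tree and branch names are hypothetical.
import numpy as np
import uproot
from sklearn.ensemble import GradientBoostingClassifier

def read_root(files, branches, tree="Events"):
    """Stream selected flat branches out of ROOT files as a NumPy matrix."""
    chunks = []
    for arrays in uproot.iterate([f"{f}:{tree}" for f in files],
                                 branches, library="np"):
        chunks.append(np.column_stack([arrays[b] for b in branches]))
    return np.concatenate(chunks)

def preprocess(X):
    """Simple standardisation; MLaaS4HEP lets the user plug in their own step."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

branches = ["pt", "eta", "phi", "mass"]                      # hypothetical branches
X_sig = preprocess(read_root(["signal.root"], branches))     # hypothetical files
X_bkg = preprocess(read_root(["background.root"], branches))
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])

model = GradientBoostingClassifier().fit(X, y)   # any model could be plugged in here
scores = model.predict_proba(X[:10])[:, 1]       # the "serving predictions" step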

Relevance: 100.00%

Abstract:

A substantial upgrade of the LHC is expected in the coming years, which foresees increasing the integrated luminosity by a factor of 10 with respect to the current value. This parameter is proportional to the number of collisions per unit time. As a consequence, the computing resources needed at all levels of event reconstruction will grow considerably. For this reason, the CMS collaboration has for several years been exploring the possibilities offered by heterogeneous computing, i.e. the practice of distributing the computation between CPUs and dedicated accelerators such as graphics cards (GPUs). One of the difficulties of this approach is the need to write, validate, and maintain different code for every device on which it has to run. This thesis presents the possibility of using SYCL to port event reconstruction code so that it runs efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that complies with the ISO C++ standard. This study focuses on porting CLUE, a clustering algorithm for calorimetric energy deposits, using oneAPI, the SYCL implementation supported by Intel. The algorithm was first ported in its standalone version, mainly to gain familiarity with SYCL and to make performance comparisons with the existing versions easier. In this case, the performance is very similar to that of native CUDA code on the same hardware. To validate the physics, the algorithm was then integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to those of the other implementations, while, in terms of computational performance, SYCL in some cases produces faster code than other abstraction layers adopted by CMS, making it an interesting option for the future of heterogeneous computing in high energy physics.

Relevance: 100.00%

Abstract:

Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the Worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, proved to be a game changer for the efficiency of data analysis during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. More recently, the Cloud computing paradigm has emerged and reached a considerable level of adoption by many scientific organizations and beyond. Clouds allow access to, and use of, large computing resources that are not owned but shared among many scientific communities. Given the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions based on the Grid. Within the LHC community, several experiments have been adopting Cloud approaches, and the experience of the CMS experiment is of particular relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. Other approaches to Cloud usage, however, are still being devised and are at the prototype level, as is the work done in this thesis. This effort is of paramount importance to equip CMS with the capability to elastically and flexibly access and use the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case for the needs of the CMS experiment. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 reviews the Cloud approaches pursued and used within the CMS Collaboration. Chapters 4 and 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing “as a Service”. The impact of this work on benchmark CMS physics use cases is also demonstrated.

Relevance: 100.00%

Abstract:

Since its discovery, the top quark has been one of the most investigated topics in particle physics. The aim of this thesis is the reconstruction of hadronically decaying top quarks with high transverse momentum (boosted tops) using the Template Overlap Method (TOM). Because of their high energy, the decay products of boosted tops partially or totally overlap and are therefore contained in a single large-radius jet (fat jet). TOM compares the internal energy distribution of the candidate fat jet to a sample of top-quark configurations obtained from Monte Carlo simulation (templates). The algorithm is based on the definition of an overlap function, which quantifies the level of agreement between the fat jet and the template, allowing an efficient discrimination of the signal from the background contributions. A working point was chosen so as to obtain a signal efficiency close to 90% with a corresponding background rejection of 70%. The performance of TOM has been tested on MC samples in the muon channel and compared with the methods previously available in the literature. All the methods will be combined in a multivariate analysis to provide a global top-tagging discriminant, which will be used in the ttbar production differential cross section measurement performed on the data acquired in 2012 at sqrt(s) = 8 TeV in the high-pT region of phase space, where new physics processes could appear. Since its performance improves with increasing pT, the Template Overlap Method will play a crucial role in the next data taking at sqrt(s) = 13 TeV, where almost all top quarks will be produced at high energy, making the standard reconstruction methods inefficient.
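
The overlap function mentioned above can be illustrated with a schematic Python sketch: for each Monte Carlo template, the energy collected in a cone around each template parton is compared to the parton energy with a Gaussian weight, and the candidate's score is the best match over all templates. This is a conceptual, TOM-like sketch under assumed cone size and resolution parameters, not the analysis code used in the thesis.

# Schematic sketch of a TOM-like overlap score, not the thesis analysis code.
# Cone size and energy resolution are illustrative choices.
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two directions in (eta, phi)."""
    dphi = np.mod(phi1 - phi2 + np.pi, 2 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def overlap(constituents, template, cone=0.2, sigma_frac=0.33):
    """Gaussian agreement between fat-jet constituents and one template.

    constituents: array of rows (E, eta, phi) for the fat-jet constituents
    template:     array of rows (E, eta, phi) for the template partons
    """
    chi2 = 0.0
    for e_t, eta_t, phi_t in template:
        # energy of the constituents falling in a cone around this template parton
        dr = delta_r(constituents[:, 1], constituents[:, 2], eta_t, phi_t)
        e_cone = constituents[dr < cone, 0].sum()
        chi2 += ((e_cone - e_t) / (sigma_frac * e_t)) ** 2
    return np.exp(-0.5 * chi2)

def top_tag_score(constituents, templates):
    """Overlap of the candidate fat jet: the best match over all templates."""
    return max(overlap(constituents, t) for t in templates)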

Relevance: 100.00%

Abstract:

Since the early 1970s, it has become clear that taking industry as the sole guiding principle, with disregard for people's health and for the world in general, is not sustainable. The sea, as an energy source, offers different types of exploitation; this project focuses on wave energy. Over the last 15 years the number of countries interested in renewable energies has grown, and many devices have appeared, first in the research world and then in the commercial one; these converters are able to transform wave energy into electrical energy, and several classifications of them exist. The purpose of this work is to analyze the efficiency of a new wave energy converter, called WavePiston, with the aim of determining the feasibility of its actual application in different wave conditions: from the energetic sea states of the North Sea to the calmer ones of the Mediterranean Sea. The evaluation of the WavePiston is based on the experimental investigation conducted at Aalborg University, in Denmark, and on a numerical model of the device, built to estimate its efficiency independently of the laboratory results. The numerical model is able to reproduce the laboratory conditions, but it cannot yet be used for an arbitrary installation, since mooring and economic aspects are not included yet.
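
To make the comparison between energetic and calm sea states concrete, one can use the standard deep-water estimate of the wave energy flux per metre of wave crest, P = rho g^2 Hm0^2 Te / (64 pi). The sketch below applies this formula with purely illustrative sea-state values; it is not the numerical model developed in the thesis.

# Deep-water wave energy flux per metre of wave crest (standard formula),
# used here only to compare sea states of different energy content.
# The sea-state values below are illustrative, not thesis data.
import math

RHO = 1025.0   # sea-water density [kg/m^3]
G = 9.81       # gravitational acceleration [m/s^2]

def wave_power_per_metre(hm0, te):
    """Energy flux P [W/m] for significant wave height hm0 [m] and energy period te [s]."""
    return RHO * G**2 * hm0**2 * te / (64.0 * math.pi)

def efficiency(captured_power, hm0, te, width):
    """Fraction of the incident wave power captured by a device of given width [m]."""
    return captured_power / (wave_power_per_metre(hm0, te) * width)

# e.g. a North Sea-like state vs a milder Mediterranean-like state (illustrative values)
for label, hm0, te in [("North Sea", 3.0, 8.0), ("Mediterranean", 1.0, 5.0)]:
    print(f"{label}: {wave_power_per_metre(hm0, te) / 1e3:.1f} kW/m")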

Relevance: 100.00%

Abstract:

Active Galactic Nuclei (AGN) are luminous, compact sources powered by the accretion of matter onto the supermassive black hole at the centre of a galaxy. A fraction of AGN, called "radio-loud", emit strongly in the radio band thanks to relativistic jets accelerated by the black hole. Misaligned AGN (MAGN) are radio-loud sources whose jet is not aligned with our line of sight (radio galaxies and SSRQs). The vast majority of the extragalactic sources observed in the gamma-ray band are blazars, while, in the TeV band in particular, only 4 MAGN have been observed. The purpose of this thesis is to evaluate the impact of the Cherenkov Telescope Array (CTA), the new TeV instrument, on MAGN studies. After studying the properties of the 4 TeV MAGN using MeV-GeV data from the Fermi telescope and TeV data from the literature, we took the MAGN observed by Fermi as TeV candidates. We then simulated 50 hours of CTA observations for each source and computed their significance. Assuming a direct extrapolation of the Fermi spectrum, we predict the discovery of 9 new TeV MAGN with CTA, all local FR I sources. Applying an exponential cutoff at 100 GeV, a more realistic spectral shape according to the observational data, we predict the discovery of 2-3 new TeV MAGN. As for spectral analysis with CTA, our studies indicate that it will be possible to obtain a spectrum for 5 new sources with observation times of the order of 250 hours. In both cases, the best candidates are always local sources (z < 0.1) with a flat Fermi spectrum (Gamma < 2.2). The best observing strategy to achieve these results does not match the current plans for CTA, which foresee an unpointed survey: these sources are faint and require long pointed observations to be detected (at least 50 hours for integrated-flux studies and 250 hours for spectral studies).
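
The two spectral assumptions compared above (a straight power-law extrapolation versus an exponential cutoff at 100 GeV) correspond to dN/dE = N0 (E/E0)^-Gamma, optionally multiplied by exp(-E/Ecut). The sketch below integrates this shape numerically to show how strongly the cutoff suppresses the flux in the CTA energy range; the normalisation, index, and energies are illustrative placeholders, not the Fermi fit values used in the thesis.

# Power law with exponential cutoff, dN/dE = N0 (E/E0)^-Gamma * exp(-E/Ecut),
# integrated numerically above a threshold. All parameter values are
# illustrative placeholders, not the Fermi best-fit values of the thesis.
import numpy as np
from scipy.integrate import quad

def dnde(e, n0, e0, gamma, e_cut=None):
    """Differential photon flux at energy e [GeV]."""
    f = n0 * (e / e0) ** (-gamma)
    if e_cut is not None:
        f *= np.exp(-e / e_cut)
    return f

def integral_flux(e_min, e_max, n0, e0, gamma, e_cut=None):
    """Photon flux integrated between e_min and e_max [GeV]."""
    val, _ = quad(lambda e: dnde(e, n0, e0, gamma, e_cut), e_min, e_max)
    return val

pars = dict(n0=1e-11, e0=1.0, gamma=2.0)   # illustrative flat Fermi-like spectrum

no_cut = integral_flux(100.0, 1e5, **pars)                    # straight extrapolation
with_cut = integral_flux(100.0, 1e5, e_cut=100.0, **pars)     # 100 GeV exponential cutoff
print(f"flux suppression above 100 GeV: {with_cut / no_cut:.3f}")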

Relevance: 100.00%

Abstract:

LHC experiments produce an enormous amount of data, estimated to be of the order of a few petabytes per year. Data management takes place on the Worldwide LHC Computing Grid (WLCG) infrastructure, both for storage and for processing. In recent years, however, many more resources have become available on High Performance Computing (HPC) farms, which generally consist of many computing nodes, each with a large number of processors. Large collaborations are working to use these resources as efficiently as possible, within the constraints imposed by their computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to run a typical ATLAS data analysis workflow on HPC systems. The framework will be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen in 2023.
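
As a loose illustration of the kind of file-level parallelism a many-core HPC node offers for such a workflow, the toy sketch below maps a simple per-file analysis over many input files with a process pool. It is not the framework developed in the thesis; the file names, tree, branch, and selection are hypothetical.

# Toy illustration of file-level parallelism on a many-core HPC node; this is
# NOT the framework developed in the thesis. File names, tree, branch, and
# selection are hypothetical.
from concurrent.futures import ProcessPoolExecutor
import uproot

def analyse(path, tree="CollectionTree"):
    """Open one input file, apply a simple selection, return the selected count."""
    with uproot.open(path) as f:
        pt = f[tree]["el_pt"].array(library="np")   # hypothetical flat branch [MeV]
        return int((pt > 25_000).sum())             # e.g. electrons above 25 GeV

if __name__ == "__main__":
    files = [f"data_{i:03d}.root" for i in range(64)]    # hypothetical input files
    with ProcessPoolExecutor(max_workers=32) as pool:    # one worker per core
        counts = list(pool.map(analyse, files))
    print("selected events:", sum(counts))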

Relevance: 100.00%

Abstract:

Radiation dosimetry is crucial in many fields where the exposure to ionizing radiation must be precisely controlled to avoid health and environmental safety issues. Radiotherapy and radioprotection are two examples in which fast and reliable detectors are needed. Compact, large-area wearable detectors are being developed to address real-life radiation dosimetry applications; their ideal properties include flexibility, lightness, and low cost. This thesis contributed to the development of Radiation sensitive OXide Field Effect Transistors (ROXFETs), detectors able to provide fast, real-time radiation readout. ROXFETs are based on thin-film transistors fabricated with a high-mobility amorphous oxide semiconductor, which makes them compatible with large-area, flexible, and low-cost production on plastic substrates. The gate dielectric material has a high dielectric constant and a high atomic number, which result in high performance and high radiation sensitivity, respectively. The aim of this work was to establish a stable and reliable fabrication process for ROXFETs with a gate dielectric grown by atomic layer deposition. A study of the effect of the gate dielectric materials was performed, focusing on the properties of the dielectric-semiconductor interface. Single-layer and multi-layer dielectric structures were compared, and the effect of the annealing temperature was studied. The device performance was characterised to understand the underlying physical processes. In this way, it was possible to determine a reliable fabrication procedure and an optimal structure for ROXFETs. An outstanding sensitivity of (65 ± 3) V/Gy was measured in detectors with a bi-layer Ta₂O₅-Al₂O₃ gate dielectric annealed at the low temperature of 180 °C.
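
The quoted sensitivity is, in essence, the slope of the threshold-voltage shift versus absorbed dose. The sketch below shows how such a slope and its uncertainty could be extracted with a simple least-squares fit; the dose and voltage values are synthetic placeholders, not measurements from the thesis.

# Extracting a dosimetric sensitivity [V/Gy] as the slope of the threshold-voltage
# shift versus absorbed dose. The data points below are synthetic placeholders,
# not measurements from the thesis.
import numpy as np

dose = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # absorbed dose [Gy] (illustrative)
dvth = np.array([0.0, 33.0, 64.0, 98.0, 131.0])   # threshold-voltage shift [V] (illustrative)

# First-order polynomial fit with covariance to get the slope and its uncertainty
coeffs, cov = np.polyfit(dose, dvth, deg=1, cov=True)
sensitivity, offset = coeffs
sens_err = np.sqrt(cov[0, 0])

print(f"sensitivity = ({sensitivity:.0f} ± {sens_err:.0f}) V/Gy")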