948 results for "Performance of High Energy Physics detectors"


Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 pb⁻¹ of data collected in pp collisions at √s = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV/c is above 95% over the whole region of pseudorapidity covered by the CMS muon system, |η| < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV/c is higher than 90% over the full η range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV/c and, using cosmic rays, is shown to be better than 10% in the central region up to pT = 1 TeV/c. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation. © 2012 IOP Publishing Ltd and Sissa Medialab srl.
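The efficiencies quoted above are, at bottom, pass/total ratios in bins of pT and η. A minimal sketch of such a ratio with a naive binomial uncertainty (the counts are illustrative, not CMS data; real measurements such as tag-and-probe on Z → μμ use more careful interval estimates):

```python
import math

def efficiency(n_pass, n_total):
    """Pass/total ratio with a naive binomial uncertainty.

    Illustrative only: actual efficiency measurements use more
    robust intervals (e.g. Clopper-Pearson), but the central
    value is this simple ratio.
    """
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

# Hypothetical counts: 9720 probe muons identified out of 10000
eff, err = efficiency(9720, 10000)
```

With these toy counts the efficiency is 97.2% with a sub-permille statistical uncertainty, illustrating why the paper can quote efficiencies to the percent level with 40 pb⁻¹ of data.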

Relevance:

100.00%

Publisher:

Abstract:

A description is provided of the software algorithms developed for the CMS tracker both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and individual primary-interaction vertices. Despite the very hostile environment at the LHC, the performance obtained with these algorithms is found to be excellent. For tt̄ events under typical 2011 pileup conditions, the average track-reconstruction efficiency for promptly produced charged particles with transverse momenta of pT > 0.9 GeV is 94% for pseudorapidities of |η| < 0.9 and 85% for 0.9 < |η| < 2.5. The inefficiency is caused mainly by hadrons that undergo nuclear interactions in the tracker material. For isolated muons, the corresponding efficiencies are essentially 100%. For isolated muons of pT = 100 GeV emitted at |η| < 1.4, the resolutions are approximately 2.8% in pT and, respectively, 10 μm and 30 μm in the transverse and longitudinal impact parameters. The position resolution achieved for reconstructed primary vertices that correspond to interesting pp collisions is 10–12 μm in each of the three spatial dimensions. The tracking and vertexing software is fast and flexible, and easily adaptable to other functions, such as fast tracking for the trigger, or dedicated tracking for electrons that takes into account bremsstrahlung.
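The momentum and impact-parameter resolutions quoted above are widths of residual distributions. A toy sketch of a relative-pT resolution estimate (synthetic numbers, not tracker data; a real analysis fits a Gaussian core rather than taking the raw spread):

```python
import statistics

def relative_pt_resolution(pt_true, pt_reco):
    """Spread of the relative residuals (reco - true) / true.

    Toy version: real detector studies fit a Gaussian to the core
    of this distribution, but the quantity being estimated is the
    same relative width.
    """
    residuals = [(r - t) / t for t, r in zip(pt_true, pt_reco)]
    return statistics.pstdev(residuals)

# Five toy muons generated at pT = 100 GeV, smeared by a few percent
res = relative_pt_resolution([100.0] * 5, [97.0, 103.0, 100.0, 98.0, 102.0])
```

Here the toy sample yields a relative resolution of about 2.3%, the same order as the 2.8% quoted for 100 GeV muons in the barrel.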

Relevance:

100.00%

Publisher:

Abstract:

The 1st chapter of this work presents the different experiments and collaborations in which I have been involved during my PhD studies in Physics. Following those descriptions, the 2nd chapter is dedicated to how radiation affects silicon sensors, as well as to some experimental measurements carried out at the CERN (Geneva, Switzerland) and IFIC (Valencia, Spain) laboratories. Besides these earlier investigation results, this chapter includes the most recent scientific papers that appeared in the latest RD50 (Research & Development #50) Status Report, published in January 2007, as well as some others published this year. The 3rd and 4th chapters are dedicated to the simulation of the electrical behaviour of solid-state detectors. Chapter 3 reports the results obtained for the illumination of edgeless detectors irradiated at different fluences, in the framework of the TOSTER Collaboration. The 4th chapter reports on the design, simulation, and fabrication of a novel 3D detector developed at CNM for ion detection in the future ITER fusion reactor. This chapter will be extended with irradiation simulations and experimental measurements in my PhD thesis.

Relevance:

100.00%

Publisher:

Abstract:

The DØ experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. The computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of DØ collaborators poses further serious difficulties for optimal use of human and computing resources. These difficulties will be exacerbated in future high energy physics experiments such as the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality to end users in DØ by developing a grid in the DØ Southern Analysis Region (DOSAR), DOSAR-Grid, using the available resources within it and a home-grown local task manager, McFarm. We present the architecture in which DOSAR-Grid is implemented, the technology used, the functionality of the grid, and the experience from operating the grid in simulation, reprocessing, and data analyses for a currently running HEP experiment.
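The core idea of a regional grid — farming a collaboration's jobs out to whatever sites are available — can be caricatured in a few lines. The dispatcher below is a hypothetical round-robin toy, not how McFarm or DOSAR-Grid actually schedules work:

```python
from itertools import cycle

def dispatch(jobs, sites):
    """Round-robin assignment of analysis jobs to regional sites.

    Hypothetical sketch: a real grid scheduler weighs site load,
    data locality, and priorities rather than rotating blindly.
    """
    assignment = {site: [] for site in sites}
    for job, site in zip(jobs, cycle(sites)):
        assignment[site].append(job)
    return assignment
```

For example, `dispatch(["j1", "j2", "j3"], ["A", "B"])` spreads three jobs over two sites, with site A receiving two of them.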

Relevance:

100.00%

Publisher:

Abstract:

The scientific success of the LHC experiments at CERN depends heavily on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected in the High Luminosity LHC (HL-LHC) phase, and consequently with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims to contribute to a CMS R&D project regarding a ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated by adding new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
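"Model-agnostic" here means the pipeline constrains only the model's interface, not its implementation. A minimal sketch under that assumption — the names (`run_pipeline`, `ThresholdModel`) are illustrative, not the MLaaS4HEP API:

```python
def run_pipeline(model, reader, preprocess):
    """Read -> preprocess -> train, for any model exposing fit()."""
    X, y = reader()                      # e.g. rows streamed from ROOT files
    X = [preprocess(row) for row in X]   # user-supplied preprocessing hook
    model.fit(X, y)
    return model

class ThresholdModel:
    """Toy stand-in for any ML backend with a fit/predict surface."""
    def fit(self, X, y):
        # place a cut at the mean of the first feature (toy "training")
        self.cut = sum(x[0] for x in X) / len(X)
    def predict(self, X):
        return [1 if x[0] > self.cut else 0 for x in X]

# Any object with the same fit/predict interface could replace ThresholdModel,
# which is the point of a model-agnostic pipeline.
model = run_pipeline(
    ThresholdModel(),
    reader=lambda: ([[0.1], [0.2], [0.8], [0.9]], [0, 0, 1, 1]),
    preprocess=lambda row: row,
)
```

Swapping in a neural network or a boosted-decision-tree backend would require no change to `run_pipeline` itself, only an object with the same `fit`/`predict` surface.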

Relevance:

100.00%

Publisher:

Abstract:

A substantial upgrade of the LHC is expected in the coming years, foreseeing an increase of the integrated luminosity by a factor of 10 with respect to the current value. This parameter is proportional to the number of collisions per unit time. Consequently, the computational resources needed at all levels of the reconstruction will grow considerably. For this reason, the CMS collaboration began some years ago to explore the possibilities offered by heterogeneous computing, that is, the practice of distributing the computation between CPUs and other dedicated accelerators, such as graphics cards (GPUs). One of the difficulties of this approach is the need to write, validate, and maintain different code for every device on which it has to run. This thesis presents the possibility of using SYCL to translate event-reconstruction code so that it can run efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that complies with the ISO C++ standard. This study focuses on porting an algorithm for clustering calorimetric energy deposits, CLUE, using oneAPI, the SYCL implementation supported by Intel. Initially, the algorithm was translated in its standalone version, mainly to gain familiarity with SYCL and for the convenience of comparing performance with the already existing versions. In this case, the performance is very similar to that of native CUDA code on the same hardware. To validate the physics, the algorithm was integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to those of the other implementations while, in terms of computational performance, SYCL in some cases produces faster code than other abstraction layers adopted by CMS, making it an interesting candidate for the future of heterogeneous computing in high energy physics.
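CLUE is a density-based clustering algorithm. As a rough illustration of the idea (local density within a cutoff, following the nearest higher-density neighbour, high-density seeds), here is a tiny pure-Python caricature; it omits the tiling, outlier handling, and parallel data layout that make the real CLUE fast on GPUs:

```python
def clue_like_clusters(points, dc, rho_min):
    """Toy CLUE-style clustering sketch.

    1) local density rho = number of neighbours within distance dc,
    2) each point follows its nearest higher-density neighbour within dc,
    3) points with no higher neighbour and rho >= rho_min seed clusters.
    Simplified: ties in density all become seeds, outliers keep label -1.
    """
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    rho = [sum(1 for q in points if dist(p, q) <= dc) for p in points]
    labels = [-1] * n
    next_label = 0
    # decreasing density, so a follower's target is already labelled
    for i in sorted(range(n), key=lambda i: -rho[i]):
        higher = [j for j in range(n)
                  if rho[j] > rho[i] and dist(points[i], points[j]) <= dc]
        if not higher and rho[i] >= rho_min:
            labels[i] = next_label          # seed: start a new cluster
            next_label += 1
        elif higher:
            nh = min(higher, key=lambda j: dist(points[i], points[j]))
            labels[i] = labels[nh]          # follower: inherit the label
    return labels
```

Two well-separated triplets of 1D points, for instance, come out as two clusters, each grown around its densest point.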

Relevance:

100.00%

Publisher:

Abstract:

Cover title.

Relevance:

100.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

The present work is an attempt to understand the effects of high-energy ball milling on the structural, electrical, and magnetic properties of some normal spinels in the ultrafine regime. Magnetism and magnetic materials have been a fascinating subject for mankind ever since the discovery of lodestone. Since then, man has been applying the principle of magnetism to build devices for various applications. Magnetism can be classified broadly into five categories: diamagnetic, paramagnetic, ferromagnetic, antiferromagnetic, and ferrimagnetic. Of these, ferro- and ferrimagnetic materials assume great commercial importance due to their unique properties, such as appropriate magnetic characteristics, high resistivity, and low eddy-current losses. The emergence of nanoscience and nanotechnology during the last decade has had its impact on the field of magnetism and magnetic materials too. It is now common knowledge that materials synthesized in the nanoregime exhibit novel and superlative properties with respect to their coarser-sized counterparts in the micron regime. These studies reveal that dielectric properties can be varied appreciably by high-energy ball milling in nanosized zinc ferrites produced by the coprecipitation method. A semiconducting behaviour was observed in these materials, with oxygen vacancies, produced during coprecipitation and milling, acting as the main charge carriers for conduction. Thus, through this study, it was possible to successfully investigate the finite-size effects on the structural, electrical, and magnetic properties of normal spinels in the ultrafine regime.