988 results for high-energy physics
Abstract:
The 1st chapter of this work presents the different experiments and collaborations in which I have been involved during my PhD studies in Physics. Following those descriptions, the 2nd chapter is dedicated to how radiation affects silicon sensors, together with some experimental measurements carried out at the CERN (Geneva, Switzerland) and IFIC (Valencia, Spain) laboratories. Besides these investigation results, this chapter includes the most recent scientific papers that appeared in the latest RD50 (Research & Development #50) Status Report, published in January 2007, as well as some others published this year. The 3rd and 4th chapters are dedicated to the simulation of the electrical behavior of solid-state detectors. Chapter 3 reports the results obtained for the illumination of edgeless detectors irradiated at different fluences, in the framework of the TOSTER Collaboration. The 4th chapter reports on the design, simulation and fabrication of a novel 3D detector developed at CNM for ion detection in the future ITER fusion reactor. This chapter will be extended with irradiation simulations and experimental measurements in my PhD thesis.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The D0 experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. The computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of D0 collaborators poses further serious difficulties for the optimal use of human and computing resources. These difficulties will be exacerbated in future high energy physics experiments, like the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality to end users in D0 by developing a grid in the D0 Southern Analysis Region (DOSAR), DOSAR-Grid, using the available resources within it and a home-grown local task manager, McFarm. We will present the architecture in which the DOSAR-Grid is implemented, the technologies used, the functionality of the grid, and the experience from operating the grid in simulation, reprocessing and data analyses for a currently running HEP experiment.
Abstract:
With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the use of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been successfully used in many areas of HEP; nevertheless, the development of an ML project and its implementation for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is largely unknown outside of the HEP community. The work presented in this thesis focuses on the development of a Machine Learning as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly from ROOT files of arbitrary size on local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that allows them to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. A service with APIs was then developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows that produce trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements for inclusion in the INFN Cloud portfolio of services.
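To make the "ML pipeline via HTTP calls" idea concrete, the sketch below shows how a client might submit a training workflow to such a service. It is a minimal Python illustration under assumptions: the endpoint URL, payload fields, and token handling are hypothetical placeholders, not the actual MLaaS4HEP/INFN-Cloud API.

```python
# Minimal sketch of submitting an ML training workflow to an MLaaS-style
# HTTP service. Endpoint, payload fields, and token are hypothetical and
# only illustrate the concept of running an ML pipeline via HTTP calls.
import requests

SERVICE_URL = "https://mlaas.example.org/api/v1"   # hypothetical base URL
TOKEN = "user-oauth-token"                         # obtained after authentication

workflow = {
    "input_files": ["root://eos.example.org//store/user/data.root"],  # ROOT inputs
    "branches": ["pt", "eta", "phi", "mass"],                         # features to read
    "labels": "is_signal",                                            # target branch
    "model": {"type": "keras_sequential", "epochs": 5},               # training config
}

# Submit the workflow; the service is assumed to return an identifier
# that can later be polled to retrieve the trained model.
resp = requests.post(
    f"{SERVICE_URL}/workflows",
    json=workflow,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("submitted workflow:", resp.json())
```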
Abstract:
The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through a high-performance network. LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected in the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project on a Machine Learning "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been extended with new features in the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contribution made in this work.
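As an illustration of the kind of read/preprocess/train pipeline such a data service automates, the toy Python sketch below reads a few branches from a ROOT file with uproot and trains a simple scikit-learn classifier. File, tree, and branch names are placeholder assumptions; unlike MLaaS4HEP itself, this toy does not handle arbitrary-size or remote inputs.

```python
# Toy read -> preprocess -> train pipeline on ROOT data.
# File, tree, and branch names are placeholders for illustration only.
import uproot
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

tree = uproot.open("events.root")["Events"]                        # open a local ROOT tree
df = tree.arrays(["pt", "eta", "phi", "is_signal"], library="pd")  # load branches as a DataFrame

X = df[["pt", "eta", "phi"]].values   # feature matrix
y = df["is_signal"].values            # binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier()  # any estimator could stand in here (model-agnostic step)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```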
Abstract:
In the coming years a substantial upgrade of the LHC is expected, which plans to increase the integrated luminosity by a factor of 10 with respect to the current value. This parameter is proportional to the number of collisions per unit time. For this reason, the computing resources needed at all levels of reconstruction will grow considerably. The CMS collaboration has therefore been exploring for some years the possibilities offered by heterogeneous computing, i.e. the practice of distributing the computation between CPUs and other dedicated accelerators, such as graphics cards (GPUs). One of the difficulties of this approach is the need to write, validate and maintain different code for each device on which it must run. This thesis presents the possibility of using SYCL to translate event-reconstruction code so that it can run efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that follows the ISO C++ standard. This study focuses on the porting of a clustering algorithm for calorimetric energy deposits, CLUE, using oneAPI, the SYCL implementation supported by Intel. Initially, the algorithm was translated in its standalone version, mainly to become familiar with SYCL and to make it easy to compare performance with the already existing versions. In this case, the performance is very similar to that of native CUDA code on the same hardware. To validate the physics, the algorithm was integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to those of the other implementations while, from the point of view of computational performance, in some cases SYCL produces faster code than other abstraction layers adopted by CMS, thus presenting itself as an interesting option for the future of heterogeneous computing in high energy physics.
Abstract:
The letters published in the ‘Focus issue on high energy particles and atmospheric processes’ serve to broaden the discussion about the influence of high energy particles on the atmosphere beyond their possible effects on clouds and climate. These letters link climate and meteorological processes with atmospheric electricity, atmospheric chemistry, high energy physics and aerosol science from the smallest molecular cluster ions through to liquid droplets. Progress in such a disparate and complex topic is very likely to benefit from continued interdisciplinary interactions between traditionally distinct science areas.
Abstract:
In this work, the effect of the milling time on the densification of alumina ceramics, with or without 5 wt.% Y2O3, is evaluated using high-energy ball milling. The milling was performed for different times of 0, 2, 5 or 10 hours. All powders milled at the different times were characterized by X-ray diffraction, showing a reduction of the degree of crystallinity and of the crystallite size as the milling time increased. The powders were compacted by cold uniaxial pressing and sintered at 1550 °C for 60 min. The green density of the compacts increased with milling time, and the sintered samples showed improved densification as the crystallite size of the milled powders decreased. © (2010) Trans Tech Publications.
Abstract:
The present study suggests the use of high energy ball milling to mix (to dope) the MgB2 phase, which has the AlB2 crystalline structure, with ZrB2, a compound with the same C32 hexagonal structure as MgB2, in different concentrations, keeping the crystalline phase structures practically unaffected while achieving an efficient mixture with the dopant. The high energy ball milling was performed with different ball-to-powder ratios. The transformation and formation of phases were analysed by X-ray diffractometry (XRD), using the Rietveld method, and by scanning electron microscopy. As the high energy ball milling reduced the crystallinity of the milled compounds, also reducing the size of the particles, the XRD analyses were affected and could be used as a comparative method to control the milling. Aiming at the recovery of crystallinity, homogenization and final phase formation, heat treatments were performed, so that crystalline phases changed during milling could be obtained again in the final product. © (2010) Trans Tech Publications.
Abstract:
We show the results and discussion of the study of a possible suppression of the extragalactic neutrino flux during its propagation due to a nonstandard interaction with a dark matter candidate field. In particular, we study the interaction of neutrinos with an ultra-light scalar field. It is shown that the extragalactic neutrino flux may be suppressed by such an interaction, providing a mechanism to reduce the ultra-high energy neutrino flux. We calculate both the case of non-self-conjugate and that of self-conjugate ultra-light dark matter. In the first case, the suppression is independent of the neutrino and dark matter masses. We conclude that care must be taken when explaining limits on the neutrino flux through source acceleration mechanisms only, since there could be other mechanisms, such as absorption during propagation, for the reduction of the neutrino flux [1]. © Published under licence by IOP Publishing Ltd.
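As a generic illustration of flux reduction by absorption during propagation (a schematic textbook form, not the specific model of this work), the observed flux can be written as the emitted flux attenuated by an optical depth built from the dark matter number density and the neutrino-dark matter cross section along the line of sight:

\[
\Phi_{\rm obs}(E) \;=\; \Phi_{\rm src}(E)\, e^{-\tau(E)},
\qquad
\tau(E) \;=\; \int_{\rm l.o.s.} n_{\rm DM}(l)\,\sigma_{\nu{\rm DM}}(E)\, dl .
\]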
Abstract:
We evaluate the potential for searching for isosinglet neutral heavy leptons (N), such as right-handed neutrinos, in the next generation of e+e- linear colliders, paying special attention to contributions from the reaction γe→WN initiated by photons from beamstrahlung and laser back-scattering. We find that these mechanisms are both competitive with and complementary to the standard e+e-→νN annihilation process for producing neutral heavy leptons in these machines and greatly extend the search range beyond that of HERA and LEP200.
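Schematically, and as the standard convolution used for photon-induced processes rather than the authors' specific computation, the contribution of these channels is obtained by folding the beamstrahlung or back-scattered photon spectrum with the subprocess cross section:

\[
\sigma\big(e^+e^- \to W N + X\big) \;=\; \int_0^{x_{\max}} dx \; f_{\gamma}(x)\,\hat{\sigma}_{\gamma e \to W N}\big(\hat{s} = x\,s\big),
\]

where x is the fraction of the beam energy carried by the photon, f_γ(x) is the photon spectrum, and s is the collider centre-of-mass energy squared.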
Abstract:
The effectiveness of the Anisotropic Analytical Algorithm (AAA) implemented in the Eclipse treatment planning system (TPS) was evaluated using the Radiological Physics Center anthropomorphic lung phantom, using both flattened and flattening-filter-free high energy beams. Radiation treatment plans were developed following the Radiation Therapy Oncology Group and the Radiological Physics Center guidelines for lung treatment using Stereotactic Body Radiation Therapy (SBRT). The tumor was covered such that at least 95% of the Planning Target Volume (PTV) received 100% of the prescribed dose, while ensuring that normal tissue constraints were respected as well. Calculated doses were exported from the Eclipse TPS and compared with the experimental data measured with thermoluminescence detectors (TLD) and radiochromic films placed inside the phantom. The results demonstrate that the AAA superposition-convolution algorithm is able to calculate SBRT treatment plans with all clinically used photon beams in the range from 6 MV to 18 MV. The measured dose distribution showed good agreement with the calculated distribution using the clinically acceptable criteria of ±5% dose difference or 3 mm distance to agreement. These results show that in a heterogeneous environment a 3D pencil-beam superposition-convolution algorithm with Monte Carlo pre-calculated scatter kernels, such as AAA, is able to reliably calculate dose, accounting for the increased lateral scattering due to the loss of electronic equilibrium in a low-density medium. The data for the high energy plans (15 MV and 18 MV) showed very good tumor coverage, in contrast to findings by other investigators for less sophisticated dose calculation algorithms, which gave lower than expected tumor doses and generally worse tumor coverage for high energy plans compared to 6 MV plans. This demonstrates that the modern superposition-convolution AAA algorithm is a significant improvement over previous algorithms and is able to calculate doses accurately for SBRT treatment plans in the highly heterogeneous environment of the thorax for both lower (≤12 MV) and higher (greater than 12 MV) beam energies.
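For reference, a common way to combine a dose-difference tolerance ΔD (here ±5%) and a distance-to-agreement tolerance Δd (here 3 mm) into a single pass/fail test is the gamma index of Low et al.; it is quoted here only as the standard composite form of such criteria, not as the exact evaluation method used above:

\[
\gamma(\mathbf{r}_m) \;=\; \min_{\mathbf{r}_c} \sqrt{\, \frac{|\mathbf{r}_c - \mathbf{r}_m|^2}{\Delta d^{\,2}} \;+\; \frac{\big(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\big)^2}{\Delta D^{\,2}} \,},
\qquad \gamma \le 1 \;\Rightarrow\; \text{pass},
\]

where the measured point r_m is compared against all calculated points r_c.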