Deployment of a data analysis workflow of the ATLAS experiment on HPC systems
Contributor(s) |
Rinaldi, Lorenzo; Carratta, Giuseppe |
Date(s) |
15/07/2022 |
Abstract |
LHC experiments produce an enormous amount of data, estimated to be of the order of a few petabytes per year. Data management takes place on the Worldwide LHC Computing Grid (WLCG) infrastructure, for both storage and processing operations. In recent years, however, many more resources have become available on High Performance Computing (HPC) farms, which generally comprise a large number of computing nodes, each with many processors. Large collaborations are working to use these resources as efficiently as possible, in a way compatible with the constraints imposed by their computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to process a typical data analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework shall be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen in 2023. |
Format |
application/pdf |
Identifier |
http://amslaurea.unibo.it/26221/1/CORCHIAMasterThesis.pdf Corchia, Federico Andrea Guillaume (2022) Deployment of a data analysis workflow of the ATLAS experiment on HPC systems. [Laurea magistrale], Università di Bologna, Corso di Studio in Physics [LM-DM270] <http://amslaurea.unibo.it/view/cds/CDS9245/> |
Language(s) |
en |
Publisher |
Alma Mater Studiorum - Università di Bologna |
Relation |
http://amslaurea.unibo.it/26221/ |
Rights |
cc_by_sa4 |
Keywords | High Energy Physics, High Performance Computing, Grid Computing, ATLAS, Physics [LM-DM270] |
Type |
PeerReviewed info:eu-repo/semantics/masterThesis |