861 results for LHC, CMS, Grid Computing, Cloud Computing, Top Physics
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events with Monte Carlo techniques. In addition, physics analysis requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has met the technological challenges of the LHC, making its scientific programme possible. As data taking continues, and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of current computing capacity will soon be reached. One of the keys to overcoming these challenges in the next decade, also in light of the budget constraints of the various national funding agencies, is to make efficient use of the available computing resources. This work aims to develop and evaluate tools that improve the understanding of how both production and analysis data are monitored in CMS. For this reason the work comprises two parts. The first, concerning distributed analysis, consists of developing a tool that quickly analyses the log files of completed job submissions, so that on the next submission the user can make better use of the computing resources. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A noteworthy aspect of these improvements is the possibility of avoiding a high level of data aggregation already at an early stage, and of collecting monitoring data at a high granularity that still allows later reprocessing and "on-demand" aggregation.
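The idea of keeping monitoring data at high granularity and aggregating only on demand can be sketched as follows. This is a minimal illustration, not the CMS monitoring system: the record fields (site, wall_time_s, exit_code) and the values are invented for the example.

```python
from collections import defaultdict

# Hypothetical fine-grained monitoring records, one per completed job.
# Field names and values are illustrative assumptions, not the real schema.
records = [
    {"site": "T2_IT_Pisa", "wall_time_s": 3600, "exit_code": 0},
    {"site": "T2_IT_Pisa", "wall_time_s": 5400, "exit_code": 1},
    {"site": "T1_US_FNAL", "wall_time_s": 1800, "exit_code": 0},
]

def aggregate(records, key):
    """Aggregate raw records on demand instead of pre-aggregating at ingest."""
    summary = defaultdict(lambda: {"jobs": 0, "failures": 0, "wall_time_s": 0})
    for r in records:
        bucket = summary[r[key]]
        bucket["jobs"] += 1
        bucket["failures"] += r["exit_code"] != 0
        bucket["wall_time_s"] += r["wall_time_s"]
    return dict(summary)

by_site = aggregate(records, "site")
print(by_site["T2_IT_Pisa"])  # {'jobs': 2, 'failures': 1, 'wall_time_s': 9000}
```

Because the raw records are kept, the same data can later be re-aggregated by any other key (user, dataset, exit code) without re-collecting anything.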
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology and Bio-informatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of whether the cause is a hardware or software fault, lack of available resources, etc.), all of the work already performed is simply lost, and when the application is later re-initiated, it has to restart all its work from scratch, wasting resources and time, while also being prone to another failure, which may delay its completion with no deadline guarantees. Our proposed solution addresses these issues by incorporating mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible by being able to move to other nodes, without any intervention from the programmer. This article provides a solution for Java applications with long execution times, by extending a JVM (Jikes research virtual machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
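The abstract's JVM-level mechanism checkpoints transparently, with no programmer intervention; the core idea can still be illustrated with an explicit, application-level sketch. The following Python fragment is an assumption-laden analogy, not the paper's implementation: a long-running loop periodically serializes its state so that, after an abrupt termination, it resumes from the last checkpoint instead of restarting from scratch.

```python
import os
import pickle
import tempfile

def checkpoint(state, path):
    """Write the state atomically so a crash mid-write cannot corrupt it."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename over the old checkpoint

def restore(path, default):
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return default

path = "work.ckpt"
state = restore(path, {"i": 0, "total": 0})
for i in range(state["i"], 1000):
    state = {"i": i + 1, "total": state["total"] + i}
    if i % 100 == 0:
        checkpoint(state, path)  # periodic checkpoint of partial work
checkpoint(state, path)
os.remove(path)  # work complete; a migration would instead ship this file
```

The JVM approach moves this burden out of the application entirely, capturing threads, stack and heap below the language level.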
Abstract:
Various differential cross-sections are measured in top-quark pair (tt¯) events produced in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV at the LHC with the ATLAS detector. These differential cross-sections are presented in a data set corresponding to an integrated luminosity of 4.6 fb−1. The differential cross-sections are presented in terms of kinematic variables of a top-quark proxy referred to as the pseudo-top-quark, whose dependence on theoretical models is minimal. The pseudo-top-quark can be defined in terms of either reconstructed detector objects or stable particles in an analogous way. The measurements are performed on tt¯ events in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of them tagged as originating from a b-quark. The hadronic and leptonic pseudo-top-quarks are defined via the leptonic or hadronic decay mode of the W boson produced by the top-quark decay in events with a single charged lepton. The cross-section is measured as a function of the transverse momentum and rapidity of both the hadronic and leptonic pseudo-top-quark, as well as the transverse momentum, rapidity and invariant mass of the pseudo-top-quark pair system. The measurements are corrected for detector effects and are presented within a kinematic range that closely matches the detector acceptance. Differential cross-section measurements of the pseudo-top-quark variables are compared with several Monte Carlo models that implement next-to-leading order or leading-order multi-leg matrix-element calculations.
Abstract:
This Letter presents a search at the LHC for s-channel single top-quark production in proton-proton collisions at a centre-of-mass energy of 8 TeV. The analyzed data set was recorded by the ATLAS detector and corresponds to an integrated luminosity of 20.3 fb−1. Selected events contain one charged lepton, large missing transverse momentum and exactly two b-tagged jets. A multivariate event classifier based on boosted decision trees is developed to discriminate s-channel single top-quark events from the main background contributions. The signal extraction is based on a binned maximum-likelihood fit of the output classifier distribution. The analysis leads to an upper limit on the s-channel single top-quark production cross-section of 14.6 pb at the 95% confidence level. The fit gives a cross-section of σ_s = 5.0 ± 4.3 pb, consistent with the Standard Model expectation.
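A binned maximum-likelihood fit of a classifier distribution can be sketched in a few lines. This is a toy illustration of the statistical technique only: the four-bin signal and background templates and the pseudo-data counts below are invented, and the fit extracts a single signal-strength parameter by minimizing the Poisson negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented per-bin templates of the classifier output (not ATLAS data).
signal = np.array([1.0, 3.0, 8.0, 15.0])       # expected signal yield per bin
background = np.array([50.0, 30.0, 15.0, 5.0])  # expected background per bin
observed = np.array([52, 33, 22, 18])           # pseudo-data counts per bin

def nll(mu):
    """Poisson negative log-likelihood (constant terms dropped) for
    signal strength mu scaling the signal template."""
    expected = mu * signal + background
    return np.sum(expected - observed * np.log(expected))

result = minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded")
mu_hat = result.x  # best-fit signal strength
```

The real analysis fits the full classifier shape with systematic uncertainties as nuisance parameters; the minimization principle is the same.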
Abstract:
A measurement of the top-antitop (tt¯) charge asymmetry is presented using data corresponding to an integrated luminosity of 4.6 fb−1 of LHC pp collisions at a centre-of-mass energy of 7 TeV collected by the ATLAS detector. Events with two charged leptons, at least two jets and large missing transverse momentum are selected. Two observables are studied: A_C^ℓℓ, based on the identified charged leptons, and A_C^tt¯, based on the reconstructed tt¯ final state. The asymmetries are measured to be A_C^ℓℓ = 0.024 ± 0.015 (stat.) ± 0.009 (syst.) and A_C^tt¯ = 0.021 ± 0.025 (stat.) ± 0.017 (syst.). The measured values are in agreement with the Standard Model predictions.
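A counting asymmetry of this kind is simple to compute once the per-event observable is in hand. The sketch below assumes the lepton-based form, where each event contributes Δ|η| = |η(ℓ+)| − |η(ℓ−)| and the asymmetry counts the sign of that difference; the event list is toy input, not ATLAS data.

```python
def charge_asymmetry(delta_abs_eta):
    """A_C = (N(Δ|η| > 0) − N(Δ|η| < 0)) / (N(Δ|η| > 0) + N(Δ|η| < 0))."""
    n_pos = sum(1 for d in delta_abs_eta if d > 0)
    n_neg = sum(1 for d in delta_abs_eta if d < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# Toy per-event values of Δ|η| (invented for illustration).
toy_events = [0.4, -0.2, 0.1, 0.3, -0.5, 0.2, -0.1, 0.6]
print(charge_asymmetry(toy_events))  # 5 positive vs 3 negative -> 0.25
```

The measured asymmetries are additionally unfolded for detector effects, which this raw counting sketch omits.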
Abstract:
The mass of the top quark is measured in a data set corresponding to 4.6 fb−1 of proton-proton collisions with centre-of-mass energy √s = 7 TeV collected by the ATLAS detector at the LHC. Events consistent with hadronic decays of top-antitop quark pairs with at least six jets in the final state are selected. The substantial background from multijet production is modelled with data-driven methods that utilise the number of identified b-quark jets and the transverse momentum of the sixth leading jet, which have minimal correlation. The top-quark mass is obtained from template fits to the ratio of three-jet to dijet mass. The three-jet mass is calculated from the three jets of a top-quark decay. Using these three jets, the dijet mass is obtained from the two jets of the W boson decay. The top-quark mass obtained from this fit is thus less sensitive to the uncertainty in the energy measurement of the jets. A binned likelihood fit yields a top-quark mass of mt = 175.1 ± 1.4 (stat.) ± 1.2 (syst.) GeV.
Abstract:
The normalized differential cross section for top-quark pair production in association with at least one jet is studied as a function of the inverse of the invariant mass of the tt¯+1-jet system. This distribution can be used for a precise determination of the top-quark mass, since gluon radiation depends on the mass of the quarks. The experimental analysis is based on proton-proton collision data collected by the ATLAS detector at the LHC with a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.6 fb−1. The selected events were identified using the lepton+jets top-quark-pair decay channel, where lepton refers to either an electron or a muon. The observed distribution is compared to a theoretical prediction at next-to-leading-order accuracy in quantum chromodynamics using the pole-mass scheme. With this method, the measured value of the top-quark pole mass, m_t^pole, is: m_t^pole = 173.7 ± 1.5 (stat.) ± 1.4 (syst.) +1.0/−0.5 (theory) GeV. This result represents the most precise measurement of the top-quark pole mass to date.
Abstract:
Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to a lack of critical resources such as sustained floating-point performance, memory bandwidth, etc. Examples of these problems are found in areas such as climate research, biology, astrophysics, high-energy physics (Monte Carlo simulations) and artificial intelligence, among others. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude short of the resources needed to solve them. Supercomputer centers have to face an increasing demand for processing performance, with the direct consequence of an increasing number of processors and systems, resulting in a more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption and improved air conditioning, among other problems. Some of these problems cannot be easily solved, so grid computing, intended as a technology enabling the addition and consolidation of computing power, can help in solving large-scale supercomputing problems. In this document, we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such cooperation can enable the solution of larger problems and to measure the efficiency that could be achieved. We also show some preliminary results of this experience and to what extent these objectives were achieved.