957 results for LHC CMS
Abstract:
Heavy-ion collisions are a powerful tool to study hot and dense QCD matter, the so-called Quark-Gluon Plasma (QGP). Since heavy quarks (charm and beauty) are predominantly produced in the early stages of the collision, they experience the complete evolution of the system. Measuring electrons from heavy-flavour hadron decays is one way to study the interaction of these particles with the QGP. With ALICE at the LHC, electrons can be identified with high efficiency and purity. A strong suppression of heavy-flavour decay electrons has been observed at high $p_{\mathrm{T}}$ in Pb-Pb collisions at 2.76 TeV. Measurements in p-Pb collisions are crucial to understand cold-nuclear-matter effects on heavy-flavour production in heavy-ion collisions. The spectrum of electrons from the decays of hadrons containing charm and beauty was measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV. The heavy-flavour decay electrons were measured using the Time Projection Chamber (TPC) and the Electromagnetic Calorimeter (EMCal) of ALICE in the transverse-momentum range $2 < p_{\mathrm{T}} < 20$ GeV/$c$. The measurements were performed on two different data sets: minimum-bias collisions and EMCal-triggered data. The non-heavy-flavour electron background was removed using an invariant-mass method. The results are compatible with unity ($R_{\mathrm{pPb}} \approx 1$), indicating that cold-nuclear-matter effects on electrons from heavy-flavour hadron decays are small in p-Pb collisions.
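For reference, the nuclear modification factor quoted above is conventionally defined as the ratio of the particle yield in p-Pb collisions to the binary-scaled cross section in pp collisions (the exact normalisation convention can vary between analyses):

$$R_{\mathrm{pPb}}(p_{\mathrm{T}}) = \frac{\mathrm{d}N_{\mathrm{pPb}}/\mathrm{d}p_{\mathrm{T}}}{\langle T_{\mathrm{pPb}} \rangle \; \mathrm{d}\sigma_{\mathrm{pp}}/\mathrm{d}p_{\mathrm{T}}}$$

where $\langle T_{\mathrm{pPb}} \rangle$ is the average nuclear overlap function; $R_{\mathrm{pPb}} = 1$ corresponds to the absence of nuclear effects.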
Abstract:
Description based on: Dec. 1982; title from cover.
Abstract:
The proliferation of course management systems (CMS) over the last decade has stimulated educators to establish novel active e-learning practices. Only a few of these practices, however, have been systematically described and published as pedagogic patterns. The lack of formal patterns is an obstacle to the systematic reuse of beneficial active e-learning experiences. This paper aims to partially fill the void by offering a collection of active e-learning patterns derived from our continuous course-design experience in standard CMS environments, such as Moodle and Blackboard. Our technical focus is on active e-learning patterns that can boost student interest in computing-related fields and increase student enrolment in computing-related courses. Members of the international e-learning community can benefit from active e-learning patterns by applying them in the design of new CMS-based courses, in computing and other technical fields.
Abstract:
For the first time, the Z$^0$ boson angular distribution in the center-of-momentum frame is measured in proton-proton collisions at $\sqrt{s} = 7$ TeV at the CERN LHC. The data sample, recorded with the CMS detector, corresponds to an integrated luminosity of approximately 36 pb$^{-1}$. Events containing a Z$^0$ and at least one jet, with a jet transverse-momentum threshold of 20 GeV and absolute jet rapidity less than 2.4, are selected for the analysis. Only the muon decay channel of the Z$^0$ is studied. Within experimental and theoretical uncertainties, the measured angular distribution agrees with next-to-leading-order perturbative QCD predictions.
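As a point of reference (my notation, not the abstract's), the scattering angle $\theta^{*}$ in the Z+jet center-of-momentum frame can be reconstructed from the rapidity difference between the Z boson and the jet,

$$\cos\theta^{*} = \tanh\left(\frac{y_{Z} - y_{\mathrm{jet}}}{2}\right),$$

a combination that is invariant under longitudinal boosts of the initial state.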
Abstract:
The goal of my work was to measure the production cross sections of the weak bosons W± and Z in their leptonic decays (e, μ) using the data collected by the ATLAS detector at the LHC at a centre-of-mass energy of √s = 13 TeV during the summer of 2015. The selected events are the same as those of the recent ATLAS Collaboration article on the same subject, which also allows a comparison between the results obtained. This comparison is in fact necessary, since the results were obtained with two different methodologies: traditional (classical) for the article, Bayesian in this thesis. The Bayesian approach makes it possible to combine the various channels and to treat systematic effects in an entirely natural way. The results obtained are in excellent agreement with the Standard Model predictions and with those published by ATLAS.
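For context, a minimal sketch of the Bayesian treatment described (the notation is mine, not the thesis's): with $N$ observed events, background expectation $b$, integrated luminosity $L$, efficiency-times-acceptance $\varepsilon$, and nuisance parameters $\theta$ encoding the systematic effects, the posterior for the cross section $\sigma$ is

$$p(\sigma \mid N) \propto \pi(\sigma) \int \mathrm{Pois}\left(N \mid \sigma\, \varepsilon(\theta)\, L(\theta) + b(\theta)\right) \pi(\theta)\, \mathrm{d}\theta,$$

where marginalising over $\theta$ propagates the systematic uncertainties, and a joint likelihood over the $e$ and $\mu$ channels combines them naturally.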
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software: "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is derived from the common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling; its purpose is to reduce the computational overheads of the traditional numerical solvers typically used in fire field modelling applications. In an example discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique, automated dynamic control of the solution process, is achieved through the use of artificial-intelligence techniques and is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production-rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability, since acceptable convergence is obtained within each time step, unlike in some of the comparison simulations.
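A minimal sketch of the kind of rule-based relaxation control described above (the rules, thresholds, and toy solver are illustrative assumptions of mine, not SMARTFIRE's actual production-rule engine):

# Illustrative sketch: rule-based dynamic control of solver relaxation.
# The rules, thresholds, and toy solver below are hypothetical placeholders,
# not SMARTFIRE's implementation.

def relaxation_controller(history, alpha, alpha_min=0.1, alpha_max=0.9):
    """Adjust the relaxation factor from the recent residual history."""
    if len(history) < 3:
        return alpha                      # not enough data to act on yet
    r2, r1, r0 = history[-3:]
    if r0 > r1 > r2:                      # residual rising: damp harder
        return max(alpha_min, 0.5 * alpha)
    if r0 < 0.7 * r1:                     # converging fast: relax more
        return min(alpha_max, 1.2 * alpha)
    return alpha                          # steady progress: leave as is

def toy_sweep(residual, alpha):
    """Stand-in for one solver sweep: contraction rate depends on alpha."""
    return residual * (1.0 - 0.8 * alpha)

residual, alpha, history = 1.0, 0.5, []
for sweep in range(200):
    residual = toy_sweep(residual, alpha)
    history.append(residual)
    alpha = relaxation_controller(history, alpha)
    if residual < 1e-9:
        print(f"converged after {sweep + 1} sweeps, alpha = {alpha:.2f}")
        break

In a real fire field model the "residual" would be the solver residual norm of each equation group per sweep, and the rule set would be richer, but the monitor-then-adjust loop is the essence of the blackboard scheme described.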
Abstract:
In this dissertation, we investigate nuclear effects in quarkonium production processes at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). To this end, we consider the Color Evaporation Model (CEM), based on partonic processes calculated with perturbative QCD and on non-perturbative interactions via the exchange of soft gluons for the formation of the quarkonium state. Quarkonium suppression is one of the signals of the formation of the so-called Quark-Gluon Plasma (QGP) in ultrarelativistic heavy-ion collisions. However, the suppression in nucleus-nucleus (AA) collisions is not caused solely by QGP formation; indeed, quarkonium suppression has also been observed in proton-nucleus (pA) collisions. To separate hot-matter effects (due to the QGP) from cold-matter effects (not due to the QGP), one can first look at pA collisions, where only cold-matter effects play a fundamental role, and then apply these effects to AA collisions, since part of the suppression is due to cold matter. In the high-energy regime, quarkonium production depends strongly on the nuclear gluon distribution, which provides a unique opportunity to study the small-x behaviour of gluons inside the nucleus and, consequently, to constrain the nuclear effects. We study the nuclear processes using different parametrisations of the nuclear parton distributions. We calculate the nuclear ratio for pA and AA processes as a function of rapidity for quarkonium production, which allows the nuclear effects to be estimated. In addition, we present a comparison with RHIC data for J/Ψ meson production in pA collisions, showing that the analysis of this observable remains an open question in the literature. Finally, we estimate the production of heavy quarks and quarkonium in the initial stage and during the thermal phase of an ultrarelativistic heavy-ion collision, with the aim of estimating the different contributions to the production and some effects of the nuclear medium.
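For orientation (a standard statement of the model, not a result of this dissertation): in the CEM the charmonium cross section is taken to be a fixed fraction of the $c\bar{c}$ cross section integrated from the pair threshold up to the open-charm threshold,

$$\sigma_{\psi} = F_{\psi} \int_{4m_{c}^{2}}^{4m_{D}^{2}} \frac{\mathrm{d}\sigma_{c\bar{c}}}{\mathrm{d}M^{2}}\, \mathrm{d}M^{2},$$

where $M$ is the pair invariant mass, $m_{c}$ and $m_{D}$ are the charm-quark and $D$-meson masses, and $F_{\psi}$ is a non-perturbative constant fitted to data.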
Abstract:
We present the measurement of a rare Standard Model process, pp → W±γγ, in the leptonic decays of the W±. The measurement is made with 19.4 fb$^{-1}$ of 8 TeV data collected in 2012 by the CMS experiment. The measured cross section is consistent with the Standard Model prediction and has a significance of 2.9σ. Limits are placed on dimension-8 effective-field-theory operators describing anomalous quartic gauge couplings. The analysis is particularly sensitive to the $f_{T,0}$ coupling, and a 95% confidence limit is placed at $-35.9 < f_{T,0}/\Lambda^{4} < 36.7$ TeV$^{-4}$. Studies of the pp → Zγγ process are also presented. The Zγγ signal is in good agreement with the Standard Model and has a significance of 5.9σ.
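For reference, in the convention commonly used for such analyses (Éboli et al.), the operator constrained above is the dimension-8 field-strength operator

$$\mathcal{L}_{T,0} = \frac{f_{T,0}}{\Lambda^{4}}\, \mathrm{Tr}\left[\hat{W}_{\mu\nu} \hat{W}^{\mu\nu}\right] \times \mathrm{Tr}\left[\hat{W}_{\alpha\beta} \hat{W}^{\alpha\beta}\right],$$

where $\hat{W}_{\mu\nu}$ is the SU(2) field-strength tensor and $\Lambda$ is the scale of new physics.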
Abstract:
Through the adoption of Bills 33 and 34, in 2006 and 2009 respectively, the Quebec government created new private organisations dispensing specialised care, namely the specialised medical centres. In doing so, it regulated their practice, notably with the objective of ensuring a satisfactory level of quality and safety of the care dispensed there. The author analyses the various existing mechanisms for ensuring the quality and safety of care offered in specialised medical centres, in order to determine whether the legislator's objective is met. She thus sets out the specific mechanisms provided for in the Loi sur les services de santé et services sociaux applicable to specialised medical centres that play a role in maintaining the quality and safety of services, as well as indirect mechanisms having an impact in this respect, such as economic motivation and liability actions. She then turns to the processes arising from professional regulation. She concludes that two mechanisms are missing to meet the legislator's objective and proposes possible solutions in this regard.
Abstract:
Since it has been found that the MadGraph Monte Carlo generator offers superior flavour-matching capability compared to Alpgen, the suitability of MadGraph for the generation of $t\bar{t}b\bar{b}$ events is explored, with a view to simulating this background in searches for the Standard Model Higgs production and decay process $t\bar{t}H$, $H \to b\bar{b}$. Comparisons are performed between the output of MadGraph and that of Alpgen, showing that satisfactory agreement in their predictions can be obtained with the appropriate generator settings. A search for the Standard Model Higgs boson, produced in association with the top quark and decaying into a $b\bar{b}$ pair, using 20.3 fb$^{-1}$ of 8 TeV collision data collected in 2012 by the ATLAS experiment at CERN's Large Hadron Collider, is presented. The GlaNtp analysis framework, together with the RooFit package and associated software, is used to obtain an expected 95% confidence-level limit of $4.2^{+4.1}_{-2.0}$ times the Standard Model expectation, and the corresponding observed limit is found to be 5.9; this is within experimental uncertainty of the published result of the analysis performed by the ATLAS collaboration. A search for a heavy charged Higgs boson of mass $m_{H^\pm}$ in the range $200 \le m_{H^\pm}/\mathrm{GeV} \le 600$, where the Higgs boson mediates the five-flavour beyond-the-Standard-Model physics process $gb \to tH^{\pm} \to ttb$, with one top quark decaying leptonically and the other decaying hadronically, is presented, using the 20.3 fb$^{-1}$ 8 TeV ATLAS data set. Upper limits on the product of the production cross-section and the branching ratio of the $H^{\pm}$ boson are computed for six mass points, and these are found to be compatible within experimental uncertainty with those obtained by the corresponding published ATLAS analysis.
Abstract:
Crossing the Franco-Swiss border, the Large Hadron Collider (LHC), designed to collide 7 TeV proton beams, is the world's largest and most powerful particle accelerator, whose operation was originally intended to commence in 2008. Unfortunately, due to an interconnect discontinuity in one of the main dipole circuit's 13 kA superconducting busbars, a catastrophic quench event occurred during initial magnet training, causing significant physical damage to the system. Furthermore, investigation into the cause found that such discontinuities were present not only in the circuit in question but throughout the entire LHC. This prevented further magnet training and ultimately resulted in the maximum sustainable beam energy being limited to approximately half the design nominal, 3.5-4 TeV, for the first three years of operation (Run 1, 2009-2012), and in a major consolidation campaign being scheduled for the first long shutdown (LS 1, 2012-2014). Throughout Run 1, a series of studies attempted to predict the number of post-installation training quenches still required to qualify each circuit to nominal-energy current levels. With predictions in excess of 80 quenches (each having a recovery time of 8-12+ hours) just to achieve 6.5 TeV, and close to 1000 quenches for 7 TeV, it was decided that for Run 2 all systems would be qualified for at least 6.5 TeV operation. However, even with all interconnect discontinuities scheduled to be repaired during LS 1, numerous other concerns regarding circuit stability arose: in particular, observations of erratic behaviour of the magnet bypass diodes and of degradation of other potentially weak busbar sections, as well as of seemingly random millisecond spikes in beam losses, known as unidentified falling object (UFO) events, which, if they persist at 6.5 TeV, may eventually deposit sufficient energy to quench adjacent magnets. In light of the above, the thesis hypothesis states that, even with the observed issues, the LHC main dipole circuits can safely support and sustain near-nominal proton beam energies of at least 6.5 TeV. Research into minimising the risk of magnet training led to the development and implementation of a new qualification method, capable of providing conclusive evidence that all aspects of all circuits, other than the magnets and their internal joints, can safely withstand a quench event at near-nominal current levels, allowing magnet training to be carried out both systematically and without risk. This method has become known as the Copper Stabiliser Continuity Measurement (CSCM). Results were a success, with all circuits eventually being subjected to a full current decay from 6.5 TeV equivalent current levels with no measurable damage. Research into UFO events led to the development of a numerical model capable of simulating typical UFO events, reproducing entire Run 1 measured event data sets and extrapolating to 6.5 TeV to predict the likelihood of UFO-induced magnet quenches. The results provided interesting insights into the phenomena involved and confirmed the possibility of UFO-induced magnet quenches. The model was also capable of predicting whether such events, if left unaccounted for, are likely to be commonplace, resulting in significant long-term issues for 6.5+ TeV operation.
Addressing the thesis hypothesis, the following written works detail the development and results of all CSCM qualification tests and the subsequent magnet training, as well as the development and simulation results of both the 4 TeV and 6.5 TeV UFO event modelling. The thesis concludes, post-LS 1, with the LHC successfully sustaining 6.5 TeV proton beams, but with UFO events, as predicted, causing otherwise uninitiated magnet quenches and remaining at the forefront of system-availability issues.
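A toy sketch of the kind of UFO modelling described (entirely illustrative assumptions of mine, not the thesis's calibrated model: a macroparticle crossing the beam produces a Gaussian-in-time loss pulse, and a magnet is taken to quench if the deposited energy exceeds a fixed threshold; all numbers are placeholders):

import math

# Toy UFO event model: a dust grain falling through the beam produces a
# millisecond-scale, roughly Gaussian beam-loss pulse. A magnet quenches
# here if the energy reaching the coil exceeds a threshold.

def ufo_loss_pulse(t, peak_rate, sigma=0.5e-3):
    """Instantaneous loss rate (protons/s) at time t for a Gaussian pulse."""
    return peak_rate * math.exp(-0.5 * (t / sigma) ** 2)

def deposited_energy(peak_rate, energy_per_proton_j, coupling,
                     sigma=0.5e-3, dt=1e-5):
    """Numerically integrate the pulse to get energy reaching the coil (J)."""
    total, t = 0.0, -5 * sigma
    while t < 5 * sigma:
        total += ufo_loss_pulse(t, peak_rate, sigma) * dt
        t += dt
    return total * energy_per_proton_j * coupling

E_PROTON_J = 6.5e12 * 1.602e-19   # 6.5 TeV per proton, ~1.0e-6 J
COUPLING = 1e-3                   # illustrative fraction reaching the coil
QUENCH_THRESHOLD_J = 1.0          # illustrative energy margin

for peak_rate in (1e10, 1e11, 1e12):  # protons/s at pulse maximum
    e_dep = deposited_energy(peak_rate, E_PROTON_J, COUPLING)
    verdict = "QUENCH" if e_dep > QUENCH_THRESHOLD_J else "ok"
    print(f"peak loss {peak_rate:.0e} p/s -> {e_dep:.3f} J: {verdict}")

The actual thesis model would replace the fixed coupling and threshold with beam-loss-monitor calibrations and current-dependent quench margins, but the structure (loss pulse, energy integral, threshold comparison) matches the description above.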
Abstract:
The scalar sector of the simplest version of the 3-3-1 electroweak model is constructed with just three Higgs triplets. We show that a relation involving two of the constants of the model, two vacuum expectation values of the neutral scalars, and the mass of the doubly charged Higgs boson yields important information concerning the signals of this scalar particle.