957 results for cms lhc 7TeV
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivation of common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling. The purpose of "group solvers" is to reduce the computational overheads associated with traditional numerical solvers typically used in fire field modelling applications. In an example, discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, which is achieved through the use of artificial intelligence techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
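The automated control idea described above can be illustrated with a toy sketch: hypothetical production rules read a residual history from a shared "blackboard" and adjust the solver's under-relaxation factor. The function name, rule thresholds, and data structure are all invented for illustration; they are not SMARTFIRE's actual implementation.

```python
# Hypothetical sketch of rule-based solver relaxation control: production
# rules inspect a residual history posted on a "blackboard" dictionary and
# return an adjusted under-relaxation factor. Names and thresholds are
# invented, not taken from SMARTFIRE.

def control_relaxation(blackboard, relax):
    """Return an adjusted under-relaxation factor for the next sweep."""
    residuals = blackboard["residual_history"]
    if len(residuals) < 2:
        return relax                      # not enough history: no rule fires
    if residuals[-1] > residuals[-2]:
        return max(0.1, relax * 0.8)      # rule 1: diverging -> damp harder
    if residuals[-1] < 0.5 * residuals[-2]:
        return min(1.0, relax * 1.1)      # rule 2: converging fast -> loosen
    return relax                          # otherwise leave unchanged

bb = {"residual_history": [1.0, 0.3]}     # residual fell sharply
print(control_relaxation(bb, 0.7))        # rule 2 fires: 0.7 * 1.1
```

A real controller would monitor several equation residuals at once and post its decisions back to the blackboard, but the rule-firing pattern is the same.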
Abstract:
In this dissertation, we investigate nuclear effects in quarkonium production processes at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). To this end, we consider the Color Evaporation Model (CEM), based on partonic processes computed in perturbative QCD and on non-perturbative interactions via soft-gluon exchange for quarkonium formation. Quarkonium suppression is one of the signals of the formation of the so-called Quark-Gluon Plasma (QGP) in ultrarelativistic heavy-ion collisions. However, the suppression observed in nucleus-nucleus (AA) collisions is not caused solely by QGP formation. Indeed, quarkonium suppression has also been observed in proton-nucleus (pA) collisions. In order to separate hot-matter effects (due to the QGP) from cold-matter effects (not due to the QGP), one can first look at pA collisions, where only cold-matter effects play a fundamental role, and then apply these effects to AA collisions, since part of the suppression is due to cold matter. In the high-energy regime, quarkonium production depends strongly on the nuclear gluon distribution, which offers a unique opportunity to study the small-x behaviour of gluons inside the nucleus and, consequently, to constrain nuclear effects. We study the nuclear processes using distinct parametrisations of the nuclear parton distributions. We compute the nuclear ratio for pA and AA processes as a function of rapidity for quarkonium production, which allows the nuclear effects to be estimated. In addition, we present a comparison with RHIC data for J/Ψ meson production in pA collisions, showing that the analysis of this observable remains an open question in the literature.
Additionally, we estimate the production of heavy quarks and quarkonium in the initial stage and during the thermal phase of an ultrarelativistic heavy-ion collision. The aim of this study is to estimate the distinct contributions to the production and some effects of the nuclear medium.
Abstract:
Presented is the measurement of a rare Standard Model process, pp → W±γγ, for the leptonic decays of the W±. The measurement is made with 19.4 fb−1 of 8 TeV data collected in 2012 by the CMS experiment. The measured cross section is consistent with the Standard Model prediction and has a significance of 2.9σ. Limits are placed on dimension-8 Effective Field Theory operators for anomalous quartic gauge couplings. The analysis is particularly sensitive to the fT,0 coupling, and a 95% confidence limit is placed at −35.9 < fT,0/Λ4 < 36.7 TeV−4. Studies of the pp → Zγγ process are also presented. The Zγγ signal is in close agreement with the Standard Model and has a significance of 5.9σ.
Abstract:
With the adoption of Bills 33 and 34, in 2006 and 2009 respectively, the Government of Québec created new private organisations providing specialised care, namely the specialised medical centres. In doing so, it regulated their practice, notably with the objective of ensuring a satisfactory level of quality and safety in the care they provide. The author analyses the various existing mechanisms for ensuring the quality and safety of care offered in specialised medical centres, in order to determine whether the legislator's objective is met. She thus sets out the specific mechanisms provided for in the Act respecting health services and social services that apply to specialised medical centres and play a role in maintaining the quality and safety of services, as well as indirect mechanisms with a bearing on this matter, such as economic incentives and liability actions. She then turns to the processes arising from professional regulation. She concludes that two mechanisms are missing for the legislator's objective to be met and, accordingly, proposes possible solutions.
Abstract:
Since it has been found that the MadGraph Monte Carlo generator offers superior flavour-matching capability compared to Alpgen, the suitability of MadGraph for the generation of tt̄bb̄ events is explored, with a view to simulating this background in searches for the Standard Model Higgs production and decay process ttH, H → bb̄. Comparisons are performed between the output of MadGraph and that of Alpgen, showing that satisfactory agreement in their predictions can be obtained with the appropriate generator settings. A search for the Standard Model Higgs boson, produced in association with the top quark and decaying into a bb̄ pair, using 20.3 fb−1 of 8 TeV collision data collected in 2012 by the ATLAS experiment at CERN's Large Hadron Collider, is presented. The GlaNtp analysis framework, together with the RooFit package and associated software, is used to obtain an expected 95% confidence-level limit of 4.2 +4.1 −2.0 times the Standard Model expectation, and the corresponding observed limit is found to be 5.9; this is within experimental uncertainty of the published result of the analysis performed by the ATLAS collaboration. A search for a heavy charged Higgs boson of mass mH± in the range 200 ≤ mH±/GeV ≤ 600, where the Higgs mediates the five-flavour beyond-the-Standard-Model physics process gb → tH± → ttb, with one top quark decaying leptonically and the other decaying hadronically, is presented, using the 20.3 fb−1 8 TeV ATLAS data set. Upper limits on the product of the production cross-section and the branching ratio of the H± boson are computed for six mass points, and these are found to be compatible within experimental uncertainty with those obtained by the corresponding published ATLAS analysis.
Abstract:
Crossing the Franco-Swiss border, the Large Hadron Collider (LHC), designed to collide 7 TeV proton beams, is the world's largest and most powerful particle accelerator, the operation of which was originally intended to commence in 2008. Unfortunately, due to an interconnect discontinuity in one of the main dipole circuit's 13 kA superconducting busbars, a catastrophic quench event occurred during initial magnet training, causing significant physical system damage. Furthermore, investigation into the cause found that such discontinuities were present not only in the circuit in question, but throughout the entire LHC. This prevented further magnet training and ultimately resulted in the maximum sustainable beam energy being limited to approximately half the design nominal, 3.5-4 TeV, for the first three years of operation (Run 1, 2009-2012), and in a major consolidation campaign being scheduled for the first long shutdown (LS 1, 2012-2014). Throughout Run 1, a series of studies attempted to predict the number of post-installation training quenches still required to qualify each circuit to nominal-energy current levels. With predictions in excess of 80 quenches (each having a recovery time of 8-12+ hours) just to achieve 6.5 TeV, and close to 1000 quenches for 7 TeV, it was decided that for Run 2 all systems would be qualified for at least 6.5 TeV operation. However, even with all interconnect discontinuities scheduled to be repaired during LS 1, numerous other concerns regarding circuit stability arose: in particular, observations of erratic behaviour in magnet bypass diodes and of degradation of other potentially weak busbar sections, as well as of seemingly random millisecond spikes in beam losses, known as unidentified falling object (UFO) events, which, if they persist at 6.5 TeV, may eventually deposit sufficient energy to quench adjacent magnets.
In light of the above, the thesis hypothesis states that, even with the observed issues, the LHC main dipole circuits can safely support and sustain near-nominal proton beam energies of at least 6.5 TeV. Research into minimising the risk of magnet training led to the development and implementation of a new qualification method, capable of providing conclusive evidence that all aspects of all circuits, other than the magnets and their internal joints, can safely withstand a quench event at near-nominal current levels, allowing magnet training to be carried out both systematically and without risk. This method has become known as the Copper Stabiliser Continuity Measurement (CSCM). Results were a success, with all circuits eventually being subjected to a full current decay from 6.5 TeV-equivalent current levels with no measurable damage occurring. Research into UFO events led to the development of a numerical model capable of simulating typical UFO events, reproducing entire Run 1 measured event data sets and extrapolating to 6.5 TeV to predict the likelihood of UFO-induced magnet quenches. Results provided interesting insights into the phenomena involved and confirmed the possibility of UFO-induced magnet quenches. The model was also capable of predicting whether such events, if left unaccounted for, are likely to become commonplace, resulting in significant long-term issues for 6.5+ TeV operation. Addressing the thesis hypothesis, the following written works detail the development and results of all CSCM qualification tests and subsequent magnet training, as well as the development and simulation results of both 4 TeV and 6.5 TeV UFO event modelling. The thesis concludes, post-LS 1, with the LHC successfully sustaining 6.5 TeV proton beams, but with UFO events, as predicted, triggering magnet quenches that would not otherwise have occurred and remaining at the forefront of system availability issues.
Abstract:
The scalar sector of the simplest version of the 3-3-1 electroweak model is constructed with only three Higgs triplets. We show that a relation involving two of the constants of the model, two vacuum expectation values of the neutral scalars, and the mass of the doubly charged Higgs boson leads to important information concerning the signals of this scalar particle.
Abstract:
Using a particular version of the SU(3)_L ⊗ U(1)_N electroweak model, we investigate the production of doubly charged Higgs bosons at the Large Hadron Collider. Our results include branching-ratio calculations for the doubly charged Higgs boson and for one of the neutral scalar bosons of the model.
Abstract:
The Complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we propose a set of benchmark points for the real and complex singlet extensions to be tested at the LHC run 2. They have been chosen such that the discovery prospects of the involved scalars are maximised and they fulfil the dark matter constraints. Furthermore, for some of the points the theory is stable up to high energy scales. For the computation of the decay widths and branching ratios we developed the Fortran code sHDECAY, which is based on the implementation of the real and complex singlet extensions of the SM in HDECAY.
Abstract:
At the HL-LHC, proton bunches will cross each other every 25 ns, producing an average of 140 pp collisions per bunch crossing. To operate in such an environment, the CMS experiment will need an L1 hardware trigger able to identify interesting events within a latency of 12.5 μs. The future L1 trigger will also make use of data coming from the silicon tracker to control the trigger rate. The architecture that will be used in the future to process tracker data is still under discussion. One interesting proposal makes use of the Time Multiplexed Trigger concept, already implemented in the CMS calorimeter trigger for the Phase I trigger upgrade. The proposed track-finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp-collision data, and the results show very good tracking efficiency. The algorithm will be demonstrated in hardware in the coming months using the MP7, a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s.
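The Hough-transform idea behind such track finding can be illustrated with a toy 2D version: each hit votes for every straight-line hypothesis consistent with it, and hits from a common track pile up in one accumulator cell. The grid, quantisation, and data below are invented for illustration; this is not the CMS L1 implementation, which works in track-parameter space (e.g. q/pT and φ).

```python
from collections import Counter

# Toy Hough transform for straight-line "track" finding in 2D.
# Each hit (x, y) votes for every candidate slope m, with the matching
# intercept c = y - m*x; collinear hits accumulate in one (m, c) cell.

def hough_lines(points, slopes):
    acc = Counter()
    for x, y in points:
        for m in slopes:
            c = round(y - m * x, 3)   # quantise the intercept axis
            acc[(m, c)] += 1
    return acc

pts = [(0, 1), (1, 3), (2, 5), (3, 2)]       # line y = 2x + 1 plus one noise hit
slopes = [i * 0.1 for i in range(-50, 51)]   # candidate slopes -5.0 .. 5.0
acc = hough_lines(pts, slopes)
(best_m, best_c), votes = acc.most_common(1)[0]
print(best_m, best_c, votes)                 # the three collinear hits win
```

In the hardware version the accumulator is a fixed 2D array of bins filled in parallel in the FPGA, but the voting logic is the same.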
Abstract:
The tt̄ pair production differential cross section is measured using data collected in 2012 by the CMS experiment in proton-proton collisions at a centre-of-mass energy of 8 TeV. The measurement is performed on events passing a series of selections applied to improve the signal-to-background ratio. In particular, in the all-hadronic channel, at least six jets are required in the final state of the tt̄ decay, at least two of which originate from b quarks. Once a sufficiently pure event sample is obtained, a kinematic fit is performed, which consists of minimising a chi-square function in which the invariant mass associated with the top quarks is among the free parameters. Requiring chi-square < 10, the distributions of this mass are reconstructed for the candidate events, for the signal, obtained from simulated events, and for the background, modelled by vetoing b-tagged jets in the final state of the tt̄ decay. From these distributions, a likelihood fit yields the signal and background fractions in the events. It is then possible to fill a histogram comparing the candidate events with the signal-plus-background sum for the invariant mass associated with the top quarks. Considering the range of values in which the signal-to-background ratio is best, similar comparison histograms can be obtained for the transverse momentum of the top quark and for the invariant mass and rapidity of the tt̄ system. Finally, the differential cross section is measured from the distributions of these variables after subtracting the background.
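The likelihood fit for the signal and background fractions can be sketched as a toy binned template fit: the data distribution is modelled as a mixture of a signal template and a background template, and the signal fraction is the free parameter. The templates, data counts, and grid scan below are invented for illustration; a real analysis would use a proper minimiser such as MINUIT.

```python
import math

# Toy binned maximum-likelihood fit of a signal fraction f:
# expected counts per bin are n * (f * signal + (1 - f) * background).

def nll(f_sig, data, sig_tmpl, bkg_tmpl):
    """Poisson negative log-likelihood (constant terms dropped)."""
    total = 0.0
    n = sum(data)
    for d, s, b in zip(data, sig_tmpl, bkg_tmpl):
        mu = n * (f_sig * s + (1 - f_sig) * b)
        total += mu - d * math.log(mu)
    return total

# Invented, normalised templates of the fitted top-quark mass
sig = [0.1, 0.6, 0.3]
bkg = [0.4, 0.2, 0.4]
data = [22, 44, 34]   # 100 toy events

# Scan the signal fraction on a grid (a sketch, not a real minimiser)
best = min((nll(f / 100, data, sig, bkg), f / 100) for f in range(1, 100))
print(round(best[1], 2))   # fitted signal fraction
```

With these toy numbers the data are an exact mixture at f = 0.6, so the scan recovers that fraction.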
Abstract:
The tt̄ production cross section is measured with the CMS detector in the all-jets channel in pp collisions at a centre-of-mass energy of 13 TeV. The analysis is based on the study of tt̄ events in the boosted topology, namely events in which the decay products of the top quark have a large Lorentz boost and are thus reconstructed in the detector as a single, wide jet. The data sample used in this analysis corresponds to an integrated luminosity of 2.53 fb−1. The inclusive cross section is found to be σ(tt̄) = 727 ± 46 (stat.) +115 −112 (syst.) ± 20 (lumi.) pb, a value consistent with the theoretical predictions. The differential, detector-level cross section is measured as a function of the transverse momentum of the leading jet and compared to the QCD theoretical predictions. Finally, the differential, parton-level cross section is reported, measured as a function of the transverse momentum of the leading parton, extrapolated to the full phase space and compared to the QCD predictions.
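An inclusive cross section of this kind is assembled from event counts via a relation of the form σ = (N_obs − N_bkg) / (ε · L). The sketch below just illustrates that bookkeeping; the counts and efficiency are invented, and only the 2.53 fb⁻¹ luminosity is taken from the abstract.

```python
# Back-of-the-envelope cross-section bookkeeping. All inputs other than
# the integrated luminosity are invented for illustration.

def cross_section(n_obs, n_bkg, efficiency, lumi_pb):
    """sigma = (N_obs - N_bkg) / (efficiency * integrated luminosity)."""
    return (n_obs - n_bkg) / (efficiency * lumi_pb)

# 2.53 fb^-1 = 2530 pb^-1; counts and efficiency are toy numbers chosen
# to land near the quoted central value.
sigma = cross_section(n_obs=20000, n_bkg=1600, efficiency=0.01, lumi_pb=2530)
print(round(sigma))   # ~727 pb with these toy inputs
```

The quoted statistical, systematic, and luminosity uncertainties then propagate through the same formula via the uncertainties on each input.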