998 results for ATLAS, particle physics, SM, ZZ, aTGC
Abstract:
Data on two-particle correlations in relativistic nuclear collisions exhibit structures as a function of relative azimuthal angle and rapidity. A unified description of these near-side and away-side structures is proposed for low to moderate transverse momentum. It is based on the combined effect of tubular initial conditions and hydrodynamical expansion. Contrary to expectations, the hydrodynamic solution shows that the high-energy-density tubes (left over from the initial particle interactions) give rise to particle emission in two directions, and this is what leads to the various structures. This description is sensitive to some of the initial tube parameters and may provide a probe of the strong interaction. This explanation is compared with an alternative one in which some triangularity in the initial conditions is assumed. A possible experimental test is suggested.
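For concreteness, the structures discussed here are read off two-particle correlation histograms in relative azimuthal angle and (pseudo)rapidity. The sketch below is illustrative NumPy, not the authors' analysis code; the binning and the deta range are arbitrary choices:

```python
import numpy as np

def pair_histogram(phi, eta, nbins=32):
    """Same-event pair histogram in (dphi, deta) for one event.

    phi, eta : 1D arrays of particle azimuthal angles and (pseudo)rapidities.
    Near-side (dphi ~ 0) and away-side (dphi ~ +/-pi) structures show up as
    ridges in this histogram once many events are accumulated.
    """
    i, j = np.triu_indices(len(phi), k=1)                    # all distinct pairs
    dphi = (phi[i] - phi[j] + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    deta = eta[i] - eta[j]
    hist, _, _ = np.histogram2d(dphi, deta, bins=nbins,
                                range=[[-np.pi, np.pi], [-4.0, 4.0]])
    return hist
```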
Abstract:
In this Letter we analyze the evolution of the energy distribution of test particles injected into three-dimensional (3D) magnetohydrodynamic (MHD) simulations of different magnetic reconnection configurations. When considering a single Sweet-Parker topology, the particles accelerate predominantly through a first-order Fermi process, as predicted in [3] and demonstrated numerically in [8]. When turbulence is included within the current sheet, the acceleration rate is highly enhanced, because reconnection becomes fast and independent of resistivity [4,11] and allows the formation of a thick volume filled with multiple simultaneously reconnecting magnetic fluxes. Charged particles trapped within this volume suffer several head-on scatterings with the contracting magnetic fluctuations, which significantly increase the acceleration rate and result in a first-order Fermi process. For comparison, we also tested acceleration in MHD turbulence, where particles suffer collisions with both approaching and receding magnetic irregularities, resulting in a reduced acceleration rate. We argue that the dominant acceleration mechanism approaches a second-order Fermi process in this case.
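To make the first- versus second-order distinction concrete: in a first-order process every scattering is head-on and the fractional energy gain per collision is of order V/c, whereas in a second-order process head-on gains and tail-on losses nearly cancel, leaving a net gain of order (V/c)^2. A toy Monte Carlo, not the test-particle integration of the Letter and with purely illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.05                  # scattering-center speed V/c (illustrative)
n_particles, n_scatter = 10_000, 200

def fermi_toy(first_order):
    """Toy energy evolution: head-on-only (first order) versus
    random head-on/tail-on scatterings (second order)."""
    E = np.ones(n_particles)
    for _ in range(n_scatter):
        if first_order:
            frac = 2.0 * beta                         # every collision head-on
        else:
            sign = rng.choice([1.0, -1.0], size=E.size)
            frac = 2.0 * beta * sign + 2.0 * beta**2  # near-cancelling, O(beta^2) net
        E *= 1.0 + frac
    return E.mean()

print("first order :", fermi_toy(True))   # grows like (1 + 2*beta)^n
print("second order:", fermi_toy(False))  # much slower, O(beta^2) per collision
```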
Abstract:
Diffusion is a common phenomenon in nature and is generally associated with a system trying to reach a local or global equilibrium state as a result of highly irregular individual particle motion. It is therefore of fundamental importance in physics, chemistry and biology. Particle tracking in a complex fluid can reveal important characteristics of its properties. In living cells, we coat the microbead with a peptide (RGD) that binds to integrin receptors at the plasma membrane, which connect to the cytoskeleton (CSK). This procedure is based on the hypothesis that the microsphere can move only if the structure to which it is attached moves as well. The observed trajectory of the microbeads is then a probe of the CSK, whose dynamics are governed by several factors, including thermal diffusion, pressure gradients, and molecular motors. The possibility of separating the trajectories into passive and active diffusion may give information about the viscoelasticity of the cell structure and about molecular-motor activity. We could also analyze the motion via the generalized Stokes-Einstein relation, avoiding the use of any active techniques. Usually a 12 to 16 frames-per-second (FPS) system is used to track the microbeads in the cell for about 5 minutes. Several factors impose this FPS limitation: camera-computer communication, lighting, and the computer speed required for online analysis, among others. Here we used a high-quality camera and our own software, developed in C++ under Linux, to reach a high FPS. Measurements were conducted on samples with 10× and 20× objectives. We acquired sequential images at different intervals, all with 2 µs exposure. The intervals, in milliseconds, correspond to 4-5 ms (maximum speed) and to rates of 14, 25, 50 and 100 FPS. Our preliminary results highlight the difference between passive and active diffusion: passive diffusion is represented by a Gaussian distribution of the displacements of the center of mass of individual beads between consecutive frames, whereas the active process, or anomalous diffusion, shows up as long tails in the distribution of displacements.
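In its simplest form, the passive/active separation described above amounts to comparing the histogram of frame-to-frame displacements with a Gaussian of the same variance. A minimal sketch (assuming a single bead trajectory stored as x/y arrays; not the C++ software used here):

```python
import numpy as np

def displacement_histogram(x, y, nbins=60):
    """Distribution of frame-to-frame center-of-mass displacements.

    x, y : 1D arrays with the bead position in consecutive frames.
    Passive (thermal) diffusion gives a Gaussian; active transport
    shows up as heavy tails relative to it.
    """
    steps = np.concatenate([np.diff(x), np.diff(y)])   # pool both components
    counts, edges = np.histogram(steps, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Gaussian with the same variance, for comparison with the data
    sigma = steps.std()
    gauss = np.exp(-centers**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return centers, counts, gauss
```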
Abstract:
This thesis is about three major aspects of the identification of top quarks. First comes the understanding of their production mechanism, their decay channels, and how to translate theoretical formulae into programs that can simulate such physical processes using Monte Carlo techniques. In particular, the author has been involved in the introduction of the POWHEG generator in the framework of the ATLAS experiment. POWHEG is now fully used as the benchmark program for the simulation of ttbar pair production and decay, along with MC@NLO and AcerMC: this will be shown in chapter one. The second chapter illustrates the ATLAS detector and its sub-units, such as the calorimeters and muon chambers. It is very important to evaluate their efficiency in order to fully understand what happens during the passage of radiation through the detector, and to use this knowledge in the calculation of final quantities such as the ttbar production cross section. The last part of this thesis concerns the evaluation of this quantity exploiting the so-called "golden channel" of ttbar decays, which yields one energetic charged lepton, four particle jets and a significant amount of missing transverse energy due to the neutrino. The most important systematic errors arising from the various parts of the calculation are studied in detail. Jet energy scale, trigger efficiency, Monte Carlo models, reconstruction algorithms and the luminosity measurement are examples of what can contribute to the uncertainty on the cross section.
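For reference, the cross-section extraction in such a counting analysis reduces to sigma = (N_obs - N_bkg) / (efficiency x luminosity). A minimal sketch with invented numbers (none taken from the thesis):

```python
import math

# Counting-experiment extraction of a cross section:
# sigma = (N_obs - N_bkg) / (eff * lumi). All numbers are made up.
N_obs, N_bkg = 12500.0, 4300.0     # observed and estimated background events
eff, d_eff   = 0.045, 0.003        # signal efficiency x acceptance
lumi, d_lumi = 4600.0, 170.0       # integrated luminosity in pb^-1

sigma = (N_obs - N_bkg) / (eff * lumi)                   # in pb
stat  = math.sqrt(N_obs) / (eff * lumi)                  # Poisson stat. error
syst  = sigma * math.hypot(d_eff / eff, d_lumi / lumi)   # naive propagation
print(f"sigma = {sigma:.1f} +/- {stat:.1f} (stat) +/- {syst:.1f} (syst) pb")
```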
Abstract:
A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as the heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first-order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations has been developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
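For orientation, the simplest scheme that such a tailor-made algorithm improves upon is the first-order donor-cell upwind step, which handles height-dependent settling velocities but is numerically diffusive. A minimal sketch (uniform grid and illustrative variable names assumed; this is the textbook baseline, not the thesis' algorithm):

```python
import numpy as np

def sediment_step(q, v, dz, dt):
    """One explicit donor-cell upwind step for downward sedimentation.

    q  : mixing ratio on vertical levels (index 0 = top)
    v  : positive settling velocity per level (may vary with height)
    dz : layer thickness; dt must satisfy v * dt <= dz (CFL condition)
    """
    flux = v * q                       # downward flux out of each layer
    q_new = q.copy()
    q_new -= dt / dz * flux            # loss to the layer below
    q_new[1:] += dt / dz * flux[:-1]   # gain from the layer above
    return q_new
```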
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the energy-transport models that are later simulated in 2D. In this class of models the flow of charged particles, namely negatively charged electrons and so-called holes (quasi-particles of positive charge), as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuity of the discrete normal fluxes is the most important property of this discretization from the user's perspective. It is proven that under certain assumptions on the triangulation the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge-transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and at that stage a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators completely fail to reduce the error in an iterative mesh-refinement process.
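The adaptive algorithm follows the usual estimate-mark-refine cycle. A minimal 1D sketch of the marking and refinement step (fixed-fraction marking; the thesis' estimators and mixed-FEM setting are considerably more involved):

```python
import numpy as np

def adapt(nodes, indicator, frac=0.3):
    """One adaptive refinement step: bisect the cells carrying the
    largest a posteriori error indicators (fixed-fraction marking).

    nodes     : sorted 1D mesh nodes (length n+1)
    indicator : one non-negative error indicator per cell (length n)
    frac      : fraction of cells to refine
    """
    n_mark = max(1, int(frac * indicator.size))
    marked = np.argsort(indicator)[-n_mark:]            # worst cells
    midpoints = 0.5 * (nodes[marked] + nodes[marked + 1])
    return np.sort(np.concatenate([nodes, midpoints]))  # refined mesh
```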
Abstract:
The surprising discovery of the X(3872) resonance by the Belle experiment in 2003, and its subsequent confirmation by BaBar, CDF and D0, opened up a new chapter of QCD studies and puzzles. Since then, detailed experimental and theoretical studies have been performed in an attempt to determine and explain the properties of this state. At the end of 2009 the world's largest and highest-energy particle accelerator, the Large Hadron Collider (LHC), started its operations at the CERN laboratories in Geneva. One of the main experiments at the LHC is CMS (Compact Muon Solenoid), a general-purpose detector designed to address a wide range of physical phenomena, in particular the search for the Higgs boson, the only still-unconfirmed element of the Standard Model (SM) of particle interactions, and for new physics beyond the SM itself. Even though CMS was designed to study high-energy events, its high-resolution central tracker and superior muon spectrometer make it an optimal tool to study the X(3872) state. This thesis presents the results of a series of studies of the X(3872) state performed with the CMS experiment. Already with the first year's worth of data, a clear peak for the X(3872) was identified, and the measurement of the cross-section ratio with respect to the Psi(2S) was performed. With the increased statistics collected during 2011, it was possible to study the cross-section ratio between the X(3872) and the Psi(2S) in bins of transverse momentum and to separate their prompt and non-prompt components.
Abstract:
The Zero Degree Calorimeter (ZDC) of the ATLAS experiment at CERN is placed in the TAN of the LHC collider, covering the pseudorapidity region above 8.3. It is composed of two calorimeters, each longitudinally segmented into four modules, located 140 m from the IP exactly on the beam axis. The ZDC can detect neutral particles during pp collisions and is a tool for diffractive physics. Here we present results on the forward photon energy distribution obtained using p-p collision data at √s = 7 TeV. First, pi0 reconstruction is used for the detector calibration with photons; we then show results on the forward photon energy distribution in p-p collisions together with the same distribution obtained using MC generators. Finally, a comparison between data and MC is shown.
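The calibration with photons rests on the diphoton invariant mass, m_gg = sqrt(2 E1 E2 (1 - cos theta)): a global energy-scale error shifts the reconstructed pi0 peak proportionally, so the scale is adjusted until the peak sits at the nominal pi0 mass. A minimal sketch (all numbers invented):

```python
import numpy as np

def diphoton_mass(E1, E2, theta):
    """Invariant mass of a photon pair: m = sqrt(2*E1*E2*(1 - cos(theta))).

    E1, E2 : photon energies; theta : opening angle between the photons.
    An energy-scale factor applied to both photons shifts m by the same factor.
    """
    return np.sqrt(2.0 * E1 * E2 * (1.0 - np.cos(theta)))

# toy usage: a 3% miscalibration (assumed) shifts the peak by 3%
scale = 1.03
m_reco = diphoton_mass(40.0 * scale, 35.0 * scale, 0.004)  # energies in GeV
```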
Abstract:
In this thesis, three measurements of the top-antitop differential cross section at a center-of-mass energy of 7 TeV are presented, as functions of the transverse momentum, the mass and the rapidity of the top-antitop system. The analysis has been carried out on a data sample of about 5 fb^-1 recorded with the ATLAS detector. The events have been selected with a cut-based approach in the "one lepton plus jets" channel, where the lepton can be either an electron or a muon. The most relevant backgrounds (multi-jet QCD and W+jets) have been extracted using data-driven methods; the others (Z+jets, diboson and single top) have been simulated with Monte Carlo techniques. The final background-subtracted distributions have been corrected for detector and selection effects using unfolding methods. Finally, the results have been compared with the theoretical predictions. The measurements are dominated by the systematic uncertainties and show no relevant deviation from the Standard Model predictions.
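Unfolding corrects the measured spectrum for the detector smearing and selection efficiency encoded in a response matrix. A compact sketch of the widely used iterative (D'Agostini) scheme, shown purely for illustration and not as this thesis' actual setup:

```python
import numpy as np

def unfold_iterative(measured, response, n_iter=4):
    """Iterative (D'Agostini-style) unfolding sketch.

    measured : background-subtracted reco-level histogram
    response : response[i, j] = P(reco bin i | truth bin j), with columns
               summing to the per-truth-bin selection efficiency
    """
    measured = np.asarray(measured, dtype=float)
    eff = response.sum(axis=0)                    # per-truth-bin efficiency
    truth = np.full(response.shape[1], measured.sum() / response.shape[1])
    for _ in range(n_iter):
        folded = response @ truth                 # expected reco spectrum
        ratio = np.divide(measured, folded,
                          out=np.zeros_like(measured), where=folded > 0)
        truth *= (response.T @ ratio) / eff       # Bayesian update
    return truth
```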
Abstract:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as were found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system are in qualitative agreement with available computer-simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random-close-packing diagram. Concerning dipolar systems, a comparison between the experimental partial structure factors and those from computer simulations demonstrates that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D. For such mixtures of point particles it is demonstrated that MCT always predicts a plasticization effect, i.e. a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. The predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
Abstract:
This PhD thesis presents two measurements of the differential production cross section of top-antitop (ttbar) pairs decaying in the lepton+jets final state. The normalized cross section is measured as a function of the top transverse momentum and of the ttbar mass, transverse momentum and rapidity, using the full 2011 proton-proton (pp) ATLAS data set taken at a center-of-mass energy of √s = 7 TeV and corresponding to an integrated luminosity of L = 4.6 fb^-1. The cross section is also measured at the particle level as a function of the hadronic top transverse momentum for highly energetic events, using the full 2012 data set taken at √s = 8 TeV with L = 20 fb^-1. The measured spectra are fully corrected for detector efficiency and resolution effects and are compared to several theoretical predictions, showing fairly good agreement that varies between the different spectra.
Abstract:
The production of the Z boson in proton-proton collisions at the LHC serves as a standard candle at the ATLAS experiment during early data-taking. The decay of the Z into an electron-positron pair gives a clean signature in the detector that allows for calibration and performance studies. The cross section of ~1 nb allows first LHC measurements of parton density functions. In this thesis, simulations of 10 TeV collisions at the ATLAS detector are studied. The challenges for an experimental measurement of the cross section with an integrated luminosity of 100 pb^-1 are discussed. In preparation for the cross-section determination, the single-electron efficiencies are determined via a simulation-based method and in a test of a data-driven ansatz. The two methods show very good agreement, differing by at most ~3%. The ingredients of an inclusive and a differential Z production cross-section measurement at ATLAS are discussed, and their possible contributions to systematic uncertainties are presented. For a combined sample of signal and background, the expected uncertainty on the inclusive cross section for an integrated luminosity of 100 pb^-1 is determined to be 1.5% (stat) +/- 4.2% (syst) +/- 10% (lumi). The possibilities for single-differential cross-section measurements in rapidity and transverse momentum of the Z boson, which are important quantities because of their impact on parton density functions and the capability to check for non-perturbative effects in pQCD, are outlined. The issues of an efficiency correction based on electron efficiencies as a function of the electron's transverse momentum and pseudorapidity are studied. A possible alternative is demonstrated by expanding the two-dimensional efficiencies with the additional dimension of the invariant mass of the two leptons of the Z decay.
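A common data-driven ansatz for single-electron efficiencies in Z->ee events is tag-and-probe: one well-identified electron tags the event and the other leg probes the selection. The thesis does not spell out its exact procedure, so the following is an assumed, minimal sketch with invented counts:

```python
import numpy as np

def tag_and_probe_eff(n_pass, n_total):
    """Single-electron efficiency from tag-and-probe counts.

    n_pass  : probes passing the identification requirement
    n_total : all probes (tag + probe pairs inside the Z mass window)
    Returns the efficiency and its binomial uncertainty.
    """
    eff = n_pass / n_total
    err = np.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

# illustrative counts for one (pT, eta) bin (invented numbers)
eff, err = tag_and_probe_eff(n_pass=843.0, n_total=912.0)
print(f"epsilon = {eff:.3f} +/- {err:.3f}")
```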
Abstract:
The charmonium production cross section was measured using the data from pp collisions at s^{1/2} = 7 TeV recorded in 2010 by the ATLAS experiment at the LHC. To improve the necessary understanding of the detector, an energy calibration was performed. Using electrons from charmonium decays, the energy scale of the electromagnetic calorimeters was studied at low energies. After applying the calibration, deviations of less than 0.5% were found between the measured energies and those in Monte Carlo simulations. With an integrated luminosity of 2.2 pb^{-1}, a first measurement of the inclusive cross section for the process pp->J/psi(e^{+}e^{-})+X at s^{1/2} = 7 TeV was carried out, in the accessible region of transverse momenta p_{T,ee} > 7 GeV and rapidities |y_{ee}| < 2.4. Differential cross sections were determined as functions of the transverse momentum p_{T,ee} and of the rapidity |y_{ee}|. Integrating both distributions yielded, for the inclusive cross section sigma(pp->J/psi X)BR(J/psi->e^{+}e^{-}), the values (85.1 +/- 1.9_{stat} +/- 11.2_{syst} +/- 2.9_{Lum}) nb and (75.4 +/- 1.6_{stat} +/- 11.9_{syst} +/- 2.6_{Lum}) nb, which are compatible within the systematic uncertainties. Comparisons with ATLAS and CMS measurements of the process pp->J/psi(mu^{+}mu^{-})+X showed good agreement. For the comparison with theory, predictions from different models at next-to-leading order were combined with contributions at next-to-next-to-leading order. The comparison shows good agreement once the next-to-next-to-leading-order contributions are taken into account.
Abstract:
This thesis presents an analysis for the search for Supersymmetry with the ATLAS detector at the LHC. The final state with one lepton, several coloured particles and large missing transverse energy was chosen. Particular emphasis was placed on the optimization of the requirements for lepton identification, which proved to be particularly useful when combining with multi-lepton selections. The systematic error associated with higher-order QCD diagrams in Monte Carlo production is given particular focus. Methods to verify and correct the energy measurement of hadronic showers are developed. Methods for the identification and removal of mismeasurements caused by the detector in the single-muon and four-jet environment are applied. A new detector simulation system is shown to provide good prospects for future fast Monte Carlo production. The analysis was performed for 35 pb^-1 and no significant deviation from the Standard Model is seen. Exclusion limits are set in this subchannel for minimal Supergravity, extending previous limits set by the Tevatron and LEP.
Abstract:
Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied, and for a range of parameters of interest for laser-plasma acceleration the dependence of the self-injection threshold on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser-wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on Graphics Processing Unit (GPU) clusters, is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
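At the core of any PIC code sits the particle push, for which the Boris scheme is the de facto standard. A non-relativistic sketch is given below; jasmine's GPU implementation is relativistic and heavily restructured, so this is only illustrative:

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris step for a particle velocity (non-relativistic form).

    v, E, B : 3-vectors (velocity, electric and magnetic fields at the particle)
    qm      : charge-to-mass ratio; dt : time step.
    """
    v_minus = v + 0.5 * qm * dt * E            # first half electric kick
    t = 0.5 * qm * dt * B                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)   # magnetic rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)    # magnetic rotation, step 2
    return v_plus + 0.5 * qm * dt * E          # second half electric kick

# toy usage with illustrative values (normalized units)
v_new = boris_push(np.array([0.1, 0.0, 0.0]),
                   np.array([0.0, 0.01, 0.0]),
                   np.array([0.0, 0.0, 1.0]), qm=-1.0, dt=0.05)
```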