991 results for Hadron physics
Abstract:
A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated as well as heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radii distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel whereas the detailed modelling of PSC events is beyond the scope of coarse global scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. 
a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
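The sedimentation step discussed above can be illustrated with a minimal sketch. This is not the thesis's scheme (which reconstructs first-order vertical mixing-ratio profiles precisely to limit numerical diffusion); it only shows the simpler mass-conserving donor-cell baseline that such a scheme improves upon, with all function and variable names hypothetical:

```python
# Illustrative donor-cell sedimentation step (not the thesis algorithm).
# Layer 0 is the model top; particles fall towards higher indices.
def sediment_step(q, v, dz, dt):
    """q: mixing ratios per layer, v: fall speeds (m/s, >= 0),
    dz: layer thicknesses (m), dt: time step (s)."""
    n = len(q)
    flux = [0.0] * n                      # fraction of q leaving layer i downwards
    for i in range(n):
        frac = min(v[i] * dt / dz[i], 1.0)  # CFL-limited fraction of the layer
        flux[i] = q[i] * frac
    out = [0.0] * n
    for i in range(n):
        gain = flux[i - 1] * dz[i - 1] / dz[i] if i > 0 else 0.0
        out[i] = q[i] - flux[i] + gain    # mass from flux[n-1] leaves the column
    return out
```

A profile-based first-order scheme would replace the constant-in-cell assumption behind `flux[i]` with a linear reconstruction of `q` within each layer, which is what reduces the numerical diffusion mentioned in the abstract.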
Abstract:
In this work, the QCD radiative corrections at first order in the strong coupling constant are calculated for various polarization observables in semileptonic decays of a bottom quark into a charm quark and a lepton pair. The first part analyzes the decay of an unpolarized b quark into a polarized c quark, a charged lepton and an antineutrino in the rest frame of the b quark. The radiative corrections to the unpolarized and polarized contributions to the decay rate differential in the c-quark energy are computed, where the charged lepton is treated as light and its mass is therefore neglected. The inclusive differential rate is represented in analytic form by two structure functions, which are subsequently evaluated numerically together with the polarization of the c quark. After the introduction of helicity projectors, the second part deals with the cascade decay of a polarized b quark into an unpolarized c quark and a virtual W boson, which in turn decays into a pair of light leptons. The inclusive radiative corrections to three unpolarized and five polarized helicity structure functions are calculated in analytic form; these describe the angular distribution of the decay rate differential in the four-momentum squared of the W boson. The structure functions contain the information both on the polar angular distribution between the spin vector of the b quark and the momentum of the W boson and on the spatial angular distribution between the momenta of the W boson and of the lepton pair. The momentum and spin vector of the b quark as well as the momentum of the W boson are analyzed in the b rest frame, while the lepton-pair momenta are evaluated in the W rest frame. 
In addition to these structure functions, the unpolarized and polarized scalar structure functions are given, which play a role in applications to hadronic decays. A numerical evaluation of all calculated structure functions follows. The third part discusses the non-perturbative HQET corrections to inclusive semileptonic decays of heavy hadrons containing a b quark. These describe hadronic corrections caused by the binding of the b quark inside the hadron. A total of five unpolarized and nine polarized helicity structure functions are given in analytic form, which also take a finite mass and the spin of the charged lepton into account. The structure functions are presented both in differential form as functions of the four-momentum squared of the W boson and in integrated form. Finally, the results obtained before are applied to the semi-inclusive hadronic decays of a polarized Lambda_b baryon or a B meson into a D_s or a D_s^* meson, taking the D_s^* polarization into account. For the corresponding angular distributions, the inclusive QCD corrections and the non-perturbative HQET corrections to the helicity structure functions are given in analytic form and evaluated numerically.
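For massless leptons, the polar angular decomposition described in the second part takes the standard helicity form shown below (schematic normalization; the notation $H_{\pm\pm}$, $H_{00}$ for the diagonal helicity structure functions is generic, not necessarily the thesis's):

```latex
\frac{d\Gamma}{dq^{2}\,d\cos\theta} \;\propto\;
\tfrac{3}{8}\,(1+\cos\theta)^{2}\,H_{++}(q^{2})
+\tfrac{3}{8}\,(1-\cos\theta)^{2}\,H_{--}(q^{2})
+\tfrac{3}{4}\,\sin^{2}\theta\,H_{00}(q^{2}),
```

where $\theta$ is the polar angle of the charged lepton momentum in the W rest frame relative to the W momentum direction.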
Abstract:
The present state of the theoretical predictions for hadronic heavy hadron production is not quite satisfactory. The full next-to-leading order (NLO) ${\cal O}(\alpha_s^3)$ corrections to the hadroproduction of heavy quarks have raised the leading order (LO) ${\cal O}(\alpha_s^2)$ estimates, but the NLO predictions are still slightly below the experimental numbers. Moreover, the theoretical NLO predictions suffer from the usual large uncertainty resulting from the freedom in the choice of renormalization and factorization scales of perturbative QCD. In this light there are hopes that a next-to-next-to-leading order (NNLO) ${\cal O}(\alpha_s^4)$ calculation will bring theoretical predictions even closer to the experimental data. Also, the dependence on the factorization and renormalization scales of the physical process is expected to be greatly reduced at NNLO. This would reduce the theoretical uncertainty and therefore make the comparison between theory and experiment much more significant. In this thesis I have concentrated on the part of the NNLO corrections for hadronic heavy quark production where one-loop integrals contribute in the form of a loop-by-loop product. In the first part of the thesis I use dimensional regularization to calculate the ${\cal O}(\epsilon^2)$ expansion of scalar one-loop one-, two-, three- and four-point integrals. The Laurent series of the scalar integrals is needed as an input for the calculation of the one-loop matrix elements for the loop-by-loop contributions. Since each factor of the loop-by-loop product has negative powers of the dimensional regularization parameter $\epsilon$ up to ${\cal O}(\epsilon^{-2})$, the Laurent series of the scalar integrals has to be calculated up to ${\cal O}(\epsilon^2)$. The negative powers of $\epsilon$ are a consequence of ultraviolet and infrared/collinear (or mass) divergences. Among the scalar integrals, the four-point integrals are the most complicated. 
The ${\cal O}(\epsilon^2)$ expansion of the three- and four-point integrals contains in general classical polylogarithms up to ${\rm Li}_4$ and $L$-functions related to multiple polylogarithms of maximal weight and depth four. All results for the scalar integrals are also available in electronic form. In the second part of the thesis I discuss the properties of the classical polylogarithms and present the algorithms which allow one to reduce the number of polylogarithms in an expression. I derive identities for the $L$-functions, which have been used intensively to reduce the length of the final results for the scalar integrals. I also discuss the properties of multiple polylogarithms and derive identities to express the $L$-functions in terms of multiple polylogarithms. In the third part I investigate the numerical efficiency of the results for the scalar integrals; the dependence of the evaluation time on the relative error is discussed. In the fourth part of the thesis I present the larger part of the ${\cal O}(\epsilon^2)$ results on one-loop matrix elements in heavy flavor hadroproduction containing the full spin information. The ${\cal O}(\epsilon^2)$ terms arise as a combination of the ${\cal O}(\epsilon^2)$ results for the scalar integrals, the spin algebra and the Passarino-Veltman decomposition. The one-loop matrix elements will be needed as input in the determination of the loop-by-loop part of the NNLO corrections to hadronic heavy flavor production.
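The bookkeeping behind the required expansion depth can be made explicit. Schematically, each one-loop factor has a Laurent series

```latex
I \;=\; \frac{c_{-2}}{\epsilon^{2}} + \frac{c_{-1}}{\epsilon} + c_{0}
      + c_{1}\,\epsilon + c_{2}\,\epsilon^{2} + {\cal O}(\epsilon^{3}),
```

so that in a loop-by-loop product $I^{(1)} I^{(2)}$ the finite part receives contributions such as $c_{2}^{(1)}\,c_{-2}^{(2)}$; this is why each scalar integral must be known through ${\cal O}(\epsilon^{2})$.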
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. At first the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models the flow of charged particles, namely negatively charged electrons and so-called holes, quasi-particles of positive charge, as well as their energy distributions, are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the most important property of this discretization is the continuity of the discrete normal fluxes. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge-transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators; at this stage a comparison of different estimators is performed. Additionally, a method to efficiently estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. 
For a model problem we demonstrate how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
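The dual weighted residual idea mentioned above can be summarized schematically (standard notation, not necessarily the thesis's): the error in a functional output $J$ is represented by local residuals weighted by the solution of an auxiliary dual problem,

```latex
J(u) - J(u_h) \;\approx\; \sum_{K\in\mathcal{T}_h} \rho_K(u_h)\,\omega_K(z),
```

where $\rho_K$ are cell residuals of the discrete solution $u_h$ and the weights $\omega_K$ are obtained from the dual solution $z$ satisfying $a(\varphi,z)=J(\varphi)$ for all admissible test functions $\varphi$. The weights concentrate refinement exactly where the error influences the output of interest, e.g. the contact currents.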
Abstract:
Charmless charged two-body B decays are sensitive probes of the CKM matrix, which parameterizes CP violation in the Standard Model (SM), and have the potential to reveal the presence of New Physics. The framework of CP violation within the SM, the role of the CKM matrix with its basic formalism, and the current experimental status are presented. The theoretical tools commonly used to deal with hadronic B decays and an overview of the phenomenology of charmless two-body B decays are outlined. LHCb is one of the four main experiments operating at the Large Hadron Collider (LHC), devoted to the measurement of CP violation and rare decays of charm and beauty hadrons. The LHCb detector is described, focusing on the technologies adopted for each sub-detector and summarizing their performance. The state-of-the-art of the LHCb measurements with charmless two-body B decays is then presented. Using the 37/pb of integrated luminosity collected at sqrt(s) = 7 TeV by LHCb during 2010, the direct CP asymmetries ACP(B0 -> Kpi) = −0.074 +/- 0.033 +/- 0.008 and ACP(Bs -> piK) = 0.15 +/- 0.19 +/- 0.02 are measured. Using 320/pb of integrated luminosity collected during 2011, these measurements are updated to ACP(B0 -> Kpi) = −0.088 +/- 0.011 +/- 0.008 and ACP(Bs -> piK) = 0.27 +/- 0.08 +/- 0.02. In addition, the branching ratios BR(B0 -> K+K-) = (0.13+0.06-0.05 +/- 0.07) x 10^-6 and BR(Bs -> pi+pi-) = (0.98+0.23-0.19 +/- 0.11) x 10^-6 are measured. Finally, using a sample of 370/pb of integrated luminosity collected during 2011, the relative branching ratios BR(B0 -> pi+pi-)/BR(B0 -> Kpi) = 0.262 +/- 0.009 +/- 0.017, (fs/fd)BR(Bs -> K+K-)/BR(B0 -> Kpi) = 0.316 +/- 0.009 +/- 0.019, (fs/fd)BR(Bs -> piK)/BR(B0 -> Kpi) = 0.074 +/- 0.006 +/- 0.006 and BR(Lambda_b -> ppi)/BR(Lambda_b -> pK) = 0.86 +/- 0.08 +/- 0.05 are determined.
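The direct CP asymmetries quoted above follow the standard definition (up to the sign convention adopted by the experiment):

```latex
A_{CP}(B \to f) \;=\;
\frac{\Gamma(\bar{B} \to \bar{f}) - \Gamma(B \to f)}
     {\Gamma(\bar{B} \to \bar{f}) + \Gamma(B \to f)},
```

i.e. the normalized difference of the partial decay rates of the CP-conjugate processes.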
Abstract:
This thesis reports the analysis performed to reconstruct the transverse momentum p_{t} spectra of pions, kaons and protons identified with the TOF detector of the ALICE experiment in pp Minimum Bias collisions at $\sqrt{s}=7$ TeV.
After a detailed description of all the parameters which influence the TOF PID performance (time resolution, calibration, alignment, matching efficiency, time-zero of the event), the particle identification method, an unfolding procedure, is discussed. With this method, thanks also to the excellent TOF performance, the pion and kaon spectra can be reconstructed in the 0.5
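The underlying time-of-flight identification principle can be sketched in a few lines: from the measured momentum, flight path and flight time one reconstructs the particle mass. This is only an illustration of the principle, not the thesis's unfolding procedure; the function and variable names are hypothetical:

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in m/ns

def tof_mass(p_gev, path_m, t_ns):
    """Reconstructed mass (GeV/c^2) from momentum p (GeV/c),
    flight path (m) and measured time of flight (ns)."""
    beta = path_m / (C_M_PER_NS * t_ns)   # v/c from path and time
    if beta >= 1.0:
        raise ValueError("measured beta >= 1: time too short for this path")
    # from p = gamma * m * beta * c:  m = p * sqrt(1/beta^2 - 1)
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)
```

A real analysis must fold in the resolution effects listed above (time resolution, time-zero, track length), which is what motivates the unfolding procedure instead of a simple cut on the reconstructed mass.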
Abstract:
The surprising discovery of the X(3872) resonance by the Belle experiment in 2003, and its subsequent confirmation by BaBar, CDF and D0, opened up a new chapter of QCD studies and puzzles. Since then, detailed experimental and theoretical studies have been performed in an attempt to determine and explain the properties of this state. At the end of 2009 the world's largest and highest-energy particle accelerator, the Large Hadron Collider (LHC), started operation at the CERN laboratory in Geneva. One of the main experiments at the LHC is CMS (Compact Muon Solenoid), a general-purpose detector designed to address a wide range of physical phenomena, in particular the search for the Higgs boson, the only still unconfirmed element of the Standard Model (SM) of particle interactions, and for new physics beyond the SM itself. Although CMS was designed to study high-energy events, its high-resolution central tracker and superior muon spectrometer make it an optimal tool to study the X(3872) state. This thesis presents the results of a series of studies of the X(3872) state performed with the CMS experiment. Already with the first year's worth of data, a clear peak for the X(3872) was identified, and the cross-section ratio with respect to the Psi(2S) was measured. With the increased statistics collected during 2011 it was possible to study the cross-section ratio between the X(3872) and the Psi(2S) in bins of transverse momentum and to separate their prompt and non-prompt components.
Abstract:
The subject of this thesis lies in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. 
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
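For the first problem, the bridge between the time-like data and the space-like theory is, schematically, a dispersion relation for the relevant correlator (subtractions omitted):

```latex
\Pi(q^{2}) \;=\; \frac{1}{\pi}\int_{s_{\min}}^{\infty}
\frac{\operatorname{Im}\Pi(s)}{s - q^{2} - i\varepsilon}\,\mathrm{d}s ,
```

so the spectral function $\operatorname{Im}\Pi(s)$ measured in tau decays constrains $\Pi$ at space-like $q^{2}$, where the operator product expansion introduces the condensates as parameters.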
Abstract:
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. Using a database containing GPS measurements of individual paths (position, velocity and covered space at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining some empirical statistical laws pointing out "universal" characteristics of human mobility. By developing simple stochastic models that suggest possible explanations of the empirical observations, we are able to indicate the key quantities and cognitive features that rule individual mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities that are performed, and those of the networks describing people's common use of space to the fractional dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. We propose an assimilation model to resolve the intrinsic scatter of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
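For reference, Benford's first-digit law mentioned above predicts digit frequencies $P(d)=\log_{10}(1+1/d)$. The snippet below (illustrative, not from the thesis) computes them:

```python
import math

def benford_pmf():
    """First-digit probabilities P(d) = log10(1 + 1/d) for d = 1..9."""
    return {d: math.log10(1.0 + 1.0 / d) for d in range(1, 10)}
```

Leading digit 1 is predicted to occur about 30.1% of the time and digit 9 only about 4.6%, a strongly non-uniform pattern that can be tested against the observed inter-trip times.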
Abstract:
At the COMPASS experiment at the CERN SPS, the spin structure of the nucleon is investigated via the scattering of polarized muons off polarized nucleons. The quark contribution to the nucleon spin measured in inclusive deep-inelastic scattering is not sufficient to explain the spin of the nucleon. It therefore has to be clarified how the gluon polarization and the orbital angular momenta of quarks and gluons contribute to the total spin of the nucleon. Since the gluon polarization can only be estimated from the $Q^{2}$ dependence of the asymmetries in inclusive scattering, a direct measurement of the gluon polarization is needed. The COMPASS collaboration therefore determines the cross-section asymmetries for photon-gluon fusion processes, using on the one hand open charm production and on the other hand the production of hadron pairs with large transverse momenta. In this work, the measurement of the gluon polarization with the COMPASS data of the years 2003 and 2004 is presented. For the analysis, events with large momentum transfer ($Q^{2}>1\,\mathrm{GeV}^{2}/c^{2}$) and with hadron pairs with large transverse momentum ($p_{\perp}>0.7\,\mathrm{GeV}/c$) are used. The photon-nucleon asymmetry was determined from the weighted double ratio of the selected events. The cut $p_{\perp}>0.7\,\mathrm{GeV}/c$ suppresses leading-order and QCD Compton processes, so that the asymmetry is directly linked to the gluon polarization via the analyzing power. The measured value is very small and compatible with a vanishing gluon polarization. To avoid false asymmetries caused by changes in the detector acceptance, double ratios were studied in which the cross section cancels and only the detector asymmetries remain. It could be shown that the COMPASS spectrometer exhibits no significant time dependence. 
For the calculation of the analyzing power, Monte Carlo events were generated with the LEPTO generator and the COMGeant software package. A good description of the data by the Monte Carlo is essential here; to improve the description, JETSET parameters were tuned. The result is $\frac{\Delta G}{G}=0.054\pm0.145_{\mathrm{(stat)}}\pm0.131_{\mathrm{(sys)}}\pm0.04_{\mathrm{(MC)}}$ at an average momentum fraction of $\langle x_{\mathrm{gluon}}\rangle=0.1$ and $\langle Q^{2}\rangle=1.9\,\mathrm{GeV}^{2}/c^{2}$. This result points to a very small gluon polarization and is consistent with the results of other methods, such as open charm production, and with the results obtained at the doubly polarized RHIC collider at BNL.
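The link between the measured asymmetry and the gluon polarization is, schematically (neglecting background asymmetries; the symbols $R_{\mathrm{PGF}}$ and $\hat{a}_{LL}$ are generic notation, not necessarily the thesis's):

```latex
A^{\gamma N} \;\approx\; R_{\mathrm{PGF}}\,
\langle \hat{a}_{LL} \rangle\,\frac{\Delta G}{G},
```

where $R_{\mathrm{PGF}}$ is the fraction of photon-gluon fusion events in the high-$p_{\perp}$ sample and $\hat{a}_{LL}$ is the partonic analyzing power, both of which must be taken from the tuned Monte Carlo; this is the origin of the separate MC uncertainty quoted in the result.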
Abstract:
Within the A4 experiment, the contributions of the strange quark to the electromagnetic form factors of the proton are measured. Such sea-quark effects in low-energy observables are important for the understanding of hadron structure, since they are a direct manifestation of the QCD degrees of freedom in the non-perturbative regime.

Linear combinations of the strangeness vector form factors of the proton, $G_E^s$ and $G_M^s$, are experimentally accessible via the measurement of the parity-violating asymmetry in the cross section of the elastic scattering of longitudinally polarized electrons off unpolarized nucleons. Before this work, the A4 collaboration had published two such measurements at forward scattering angles at four-momentum transfers $Q^2$ of 0.23 and 0.10 (GeV/c)$^2$, respectively. To obtain the separation of $G_E^s$ and $G_M^s$ at the higher $Q^2$ value, a measurement at backward angles with a beam energy of 315 MeV was performed.

In the A4 experiment, the electrons of a longitudinally polarized beam scattered off a liquid-hydrogen target are counted individually with a Cherenkov calorimeter. The calorimetric energy measurement separates the elastic from the inelastic events. At backward angles, this apparatus was extended with a scintillator serving as an electron tagger in order to suppress the $\gamma$ background from $\pi^0$ decay.

To make the analysis of this measurement possible, the measured energy spectra were studied in this work by means of detailed simulations of the scattering processes and of the response of the detectors, and a method for treating the remaining background from $\gamma$ conversion in front of the scintillator was developed. 
The simulation results agree with the measurements at the 5% level, and it was demonstrated that the background-treatment method is applicable.

The backward-angle asymmetry measurement obtained after applying the background treatment developed here was combined with the forward-angle measurement at the same $Q^2$ for the separation of $G_E^s$ and $G_M^s$ at $Q^2$ = 0.22 (GeV/c)$^2$. The resulting values are

$G_M^s = -0.14 \pm 0.11_{\mathrm{exp}} \pm 0.11_{\mathrm{theo}}$ and
$G_E^s = 0.050 \pm 0.038_{\mathrm{exp}} \pm 0.019_{\mathrm{theo}}$,

where the systematic uncertainty due to the background treatment is contained in the experimental error. At the end of the work, the conclusions that follow from these results for the influence of strangeness on the static electromagnetic properties of the proton are discussed.
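The parity-violating asymmetry underlying these measurements is defined in the standard way,

```latex
A_{PV} \;=\; \frac{\sigma_{R} - \sigma_{L}}{\sigma_{R} + \sigma_{L}},
```

where $\sigma_{R/L}$ are the elastic cross sections for right- and left-handed longitudinal electron polarization; the strangeness form factors enter through the interference of photon and $Z^{0}$ exchange.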
Abstract:
The conventional way of calculating hard scattering processes in perturbation theory using Feynman diagrams is not efficient enough to calculate all necessary processes, for example for the Large Hadron Collider, to a sufficient precision. Two alternatives to order-by-order calculations are studied in this thesis.

In the first part we compare the numerical implementations of four different recursive methods for the efficient computation of Born gluon amplitudes: Berends-Giele recurrence relations and recursive calculations with scalar diagrams, with maximal helicity violating vertices and with shifted momenta. Of the four methods considered, the Berends-Giele method performs best if the number of external partons is eight or larger; for fewer than eight external partons, the recursion relation with shifted momenta offers the best performance. When investigating the numerical stability and accuracy, we found that all methods give satisfactory results.

In the second part of this thesis we present an implementation of a parton shower algorithm based on the dipole formalism. The formalism treats initial- and final-state partons on the same footing. The shower algorithm can be used for hadron colliders and electron-positron colliders, and massive partons in the final state are included. Finally, we study numerical results for an electron-positron collider, the Tevatron and the Large Hadron Collider.
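The Berends-Giele method referred to above builds amplitudes from off-shell currents. Schematically (Lorentz indices and coupling factors suppressed in the three- and four-gluon vertex tensors $V_3$, $V_4$):

```latex
J^{\mu}(1,\dots,n) \;=\; \frac{-i}{p_{1,n}^{2}}
\biggl[\,\sum_{k=1}^{n-1} V_{3}\,J(1,\dots,k)\,J(k{+}1,\dots,n)
\;+ \sum_{1\le j<k\le n-1} V_{4}\,J(1,\dots,j)\,J(j{+}1,\dots,k)\,J(k{+}1,\dots,n)\biggr],
```

with $p_{1,n}=p_1+\dots+p_n$. Caching the sub-currents gives the polynomial scaling with the number of external partons that makes this method the fastest one for large multiplicities.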
Abstract:
This thesis reports on the creation and analysis of many-body states of interacting fermionic atoms in optical lattices. The realized system can be described by the Fermi-Hubbard Hamiltonian, which is an important model for correlated electrons in modern condensed matter physics. In this way, ultra-cold atoms can be utilized as a quantum simulator to study solid-state phenomena. The use of a Feshbach resonance in combination with a blue-detuned optical lattice and a red-detuned dipole trap enables independent control over all relevant parameters of the many-body Hamiltonian. By measuring the in-situ density distribution and the doublon fraction it has been possible to identify both metallic and insulating phases in the repulsive Hubbard model, including the experimental observation of the fermionic Mott insulator. In the attractive case, the appearance of strong correlations has been detected via an anomalous expansion of the cloud that is caused by the formation of non-condensed pairs. By monitoring the in-situ density distribution of initially localized atoms during free expansion in a homogeneous optical lattice, a strong influence of interactions on the out-of-equilibrium dynamics within the Hubbard model has been found. The reported experiments pave the way for future studies of magnetic order and fermionic superfluidity in a clean and well-controlled experimental system.
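The Fermi-Hubbard Hamiltonian realized in the experiment has the standard form (written here without the external confinement, which adds a term $\sum_i V_i\,\hat{n}_i$):

```latex
\hat{H} \;=\; -t\sum_{\langle i,j\rangle,\sigma}
\bigl(\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma} + \mathrm{h.c.}\bigr)
\;+\; U\sum_{i}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow},
```

where $t$ is the tunneling amplitude set by the lattice depth and $U$ the on-site interaction tuned via the Feshbach resonance; $U>0$ corresponds to the repulsive (Mott) regime and $U<0$ to the attractive pairing regime described above.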
Abstract:
The Standard Model of elementary particle physics was developed to describe the fundamental particles which constitute matter and the interactions between them. The Large Hadron Collider (LHC) at CERN in Geneva was built to solve some of the remaining open questions in the Standard Model and to explore physics beyond it, by colliding two proton beams at world-record centre-of-mass energies. The ATLAS experiment is designed to reconstruct particles and their decay products originating from these collisions. The precise reconstruction of particle trajectories plays an important role in the identification of particle jets which originate from bottom quarks (b-tagging). This thesis describes the step-wise commissioning of the ATLAS track reconstruction and b-tagging software and one of the first measurements of the b-jet production cross section in pp collisions at sqrt(s)=7 TeV with the ATLAS detector. The performance of the track reconstruction software was studied in great detail, first using data from cosmic ray showers and then collisions at sqrt(s)=900 GeV and 7 TeV. The good understanding of the track reconstruction software allowed a very early deployment of the b-tagging algorithms. First studies of these algorithms and the measurement of the b-tagging efficiency in the data are presented. They agree well with predictions from Monte Carlo simulations. The b-jet production cross section was measured with the 2010 dataset recorded by the ATLAS detector, employing muons in jets to estimate the fraction of b-jets. The measurement is in good agreement with the Standard Model predictions.
Abstract:
In this thesis, the phenomenology of the Randall-Sundrum setup is investigated. In this context models with and without an enlarged SU(2)_L x SU(2)_R x U(1)_X x P_{LR} gauge symmetry, which removes corrections to the T parameter and to the Z b_L \bar b_L coupling, are compared with each other. The Kaluza-Klein decomposition is formulated within the mass basis, which allows for a clear understanding of various model-specific features. A complete discussion of tree-level flavor-changing effects is presented. Exact expressions for five dimensional propagators are derived, including Yukawa interactions that mediate flavor-off-diagonal transitions. The symmetry that reduces the corrections to the left-handed Z b \bar b coupling is analyzed in detail. In the literature, Randall-Sundrum models have been used to address the measured anomaly in the t \bar t forward-backward asymmetry. However, it will be shown that this is not possible within a natural approach to flavor. The rare decays t \to cZ and t \to ch are investigated, where in particular the latter could be observed at the LHC. A calculation of \Gamma_{12}^{B_s} in the presence of new physics is presented. It is shown that the Randall-Sundrum setup allows for an improved agreement with measurements of A_{SL}^s, S_{\psi\phi}, and \Delta\Gamma_s. For the first time, a complete one-loop calculation of all relevant Higgs-boson production and decay channels in the custodial Randall-Sundrum setup is performed, revealing a sensitivity to large new-physics scales at the LHC.