19 results for non-standard neutrino interactions
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. For this, a complete renormalization scheme for models that allow for fermion mixing is indispensable. The correct treatment of unstable particles makes this task difficult, and no satisfactory, general solution can yet be found in the literature. In the present work, we study the renormalization of the fermion Lagrange density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second is an application to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave function renormalization constants are determined such that the subtracted propagator is diagonal on-shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion differ from those that should renormalize the outgoing fermion and the incoming antifermion, and they are not related by hermiticity, as would be desired. Instead of defining field renormalization constants identical to the wave function renormalization ones, we differentiate the two by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections of external legs. In the case of Majorana fermions, the constraints on the model are less restrictive, and one might have a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counterterm for fermion mixing matrices cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge-parameter dependence in its counterterm. We therefore propose to determine the mixing matrix counterterm by fixing the complete correction terms for a physical process to experimental measurements. As an example, we calculate the decay rate of a top quark and of a heavy neutrino. In each of the chosen models we provide sample calculations that can easily be extended to other theories.
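For orientation, a minimal sketch of on-shell renormalization conditions for a mixing fermion system, in generic textbook notation (the conventions and the treatment of absorptive parts in the thesis may differ). With the renormalized two-point function $\hat\Gamma_{ij}(p)=\delta_{ij}(\slashed p - m_i)+\hat\Sigma_{ij}(p)$, the conditions read
$$ \hat\Gamma_{ij}(p)\,u_j(p)\Big|_{p^2=m_j^2}=0 \quad (i\neq j), \qquad \lim_{p^2\to m_i^2}\frac{\slashed p+m_i}{p^2-m_i^2}\,\hat\Gamma_{ii}(p)\,u_i(p)=u_i(p), $$
i.e. the subtracted propagator is diagonal on-shell with unit residue; the absorptive parts of $\hat\Sigma_{ij}$ are what obstruct choosing field renormalization constants related by hermiticity.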
Abstract:
For the safety assessment of radioactive waste, the possibility of radionuclide migration has to be considered. Since Np (and also Th, due to the long-lived 232-Th) will be responsible for the greatest amount of radioactivity one million years after discharge from the reactor, its (im)mobilization in the geosphere is of great importance. Furthermore, the chemistry of Np(V) is quite similar (but not identical) to the chemistry of Pu(V). Three species of neptunium may be found in the near field of a waste repository, but pentavalent neptunium is the most abundant species under a wide range of natural conditions. Within this work, the interaction of Np(V) with the clay mineral montmorillonite and with melanoidins (as model substances for humic acids) was studied. The sorption of neptunium onto gibbsite, a model mineral for montmorillonite, was also investigated. The sorption of neptunium onto γ-alumina and montmorillonite was studied in a parallel doctoral work by S. Dierking. Neptunium occurs only in ultra-trace amounts in the environment; therefore, sensitive and specific methods are needed for its determination. The sorption was determined by γ spectroscopy and LSC for the whole concentration range studied. In addition, the combination of these techniques with ultrafiltration allowed the study of Np(V) complexation with melanoidins. Regrettably, the available speciation methods (e.g. CE-ICP-MS and EXAFS) are not capable of detecting neptunium at the environmentally relevant concentrations; therefore, a combination of batch experiments and speciation analyses was performed. Further, hybrid clay-based materials (HCM) of montmorillonite and melanoidins were prepared for sorption studies. The formation of the hybrid materials begins in the interlayers of the montmorillonite, and the organic material then spreads over the surface of the mineral. The sorption of Np onto HCM was studied at the environmentally relevant concentrations, and the results obtained were compared with those predicted by the linear additive model of Samadfam. The sorption of neptunium onto gibbsite was studied in batch experiments and the sorption maximum was determined at pH ~8.5. The sorption isotherm pointed to the presence of strong and weak sorption sites on gibbsite. The Np speciation was studied using EXAFS, which showed that the sorbed species was Np(V). The influence of M42-type melanoidins on the sorption of Np(V) onto montmorillonite was also investigated at pH 7. The sorption of the melanoidins was affected by the order in which the components were added and by ionic strength. The sorption of Np was affected by ionic strength, pointing to outer-sphere sorption, whereas the presence of increasing amounts of melanoidins had little influence on Np sorption.
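For reference, a linear additive (component-additivity) sorption model of the kind attributed to Samadfam predicts the distribution ratio of a hybrid material from those of its constituents; the symbols below (mass fractions f and distribution ratios R_d) are generic and not taken from the thesis:
$$ R_d^{\rm HCM} \;\approx\; f_{\rm mont}\, R_d^{\rm mont} \;+\; f_{\rm mel}\, R_d^{\rm mel}, \qquad f_{\rm mont}+f_{\rm mel}=1, $$
so that deviations of the measured Np sorption onto the hybrid material from this prediction point to non-additive effects of the organic coating.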
Abstract:
This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by numerical solution of the Navier-Stokes equation on a lattice. To this end, the Lattice-Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which gives us the advantage of treating dense systems, in particular away from thermal equilibrium. The Lattice-Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e. the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the Lattice-Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The fluctuation-dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm scales linearly, too. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particle-field system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail. For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only nearest-neighbor sites of the lattice interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmarking results of the algorithm are presented and discussed.
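For orientation, a minimal sketch of a friction coupling between point particles and a lattice fluid, together with the associated fluctuation-dissipation relation, in generic notation (the additional pressure-tensor coupling and the precise Stokes-radius interpretation developed in the thesis are not reproduced here):
$$ \mathbf{F}_i = -\Gamma\,\big[\mathbf{v}_i - \mathbf{u}(\mathbf{r}_i,t)\big] + \mathbf{f}_i(t), \qquad \big\langle f_{i\alpha}(t)\, f_{j\beta}(t') \big\rangle = 2\,k_B T\,\Gamma\,\delta_{ij}\,\delta_{\alpha\beta}\,\delta(t-t'), $$
where u(r_i,t) is the fluid velocity interpolated to the particle position, momentum conservation requires that -F_i be transferred back to the surrounding lattice nodes, and identifying Γ with a Stokes friction of the form 6πηa assigns an effective hydrodynamic radius a to the point particle.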
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
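GiNaC is an open-source C++ library, so a few lines illustrate the programming model the abstract describes (symbolic expressions as ordinary C++ objects). This is a generic usage sketch, not code from the thesis or from xloops, and the particular expression is arbitrary:

#include <iostream>
#include <ginac/ginac.h>
using namespace GiNaC;

int main()
{
    // Symbols and expressions are plain C++ objects.
    symbol x("x"), m("m");
    ex f = pow(x, 2) / (pow(x, 2) - pow(m, 2));

    ex df = f.diff(x);            // symbolic derivative with respect to x
    ex s  = f.series(x == 0, 4);  // expansion around x = 0 up to order 4

    std::cout << "f      = " << f  << std::endl;
    std::cout << "df/dx  = " << df << std::endl;
    std::cout << "series = " << s  << std::endl;
    return 0;
}

Compiled against the library (e.g. linking with -lginac -lcln), this prints the expression, its derivative, and its expansion; the one-loop integral functions described in the abstract are built from expression objects of this kind.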
Abstract:
The two systems considered in this work, aqueous solutions of ions and ionic liquids, display a wide variety of properties and possible applications, in contrast to many other systems. They are found almost everywhere in everyday life (water), or their importance is growing (ionic liquids). The electronic and the atomistic contributions were studied separately and then analysed in relation to each other. This made it possible to examine the mechanisms at work in each system in more detail. The approach is called multiscale modeling: the subunits of a system, in this case the electronic and atomistic subsystems, are considered individually. The results of these investigations show that, in the case of hydrated ions, the water-water interactions are considerably stronger than the electrostatic interaction between the water and the ion. From this it follows that ordinary non-polarizable models are sufficient to describe ion-water solutions. In the case of the ionic liquids, we treat the electronic level with very accurate post-Hartree-Fock methods and DFT, and the results are then related to those obtained at the molecular level (using CPMD and classical MD). The results so far show that hydrogen bonding cannot be neglected in ionic liquids. Furthermore, this study found that classical force fields do not describe the electrostatics (dipole and quadrupole moments) accurately enough. Combining the microscopic mechanisms with the molecular properties is particularly useful for providing, or explaining, various reference points for simulations (e.g. with classical molecular dynamics) or experiments.
Abstract:
DcuS is a membrane-integral sensory histidine kinase of the DcuSR two-component regulatory system in Escherichia coli, which regulates the gene expression of C4-dicarboxylate metabolism in response to external stimuli. How DcuS mediates signal transduction across the membrane remains little understood. This study focused on the oligomerization and protein-protein interactions of DcuS by using quantitative Fluorescence Resonance Energy Transfer (FRET) spectroscopy. A quantitative FRET analysis for fluorescence spectroscopy was developed in this study, consisting of three steps: (1) flexible background subtraction to yield background-free spectra, (2) a FRET quantification method to determine the FRET efficiency (E) and the donor fraction (fD = [donor] / ([donor] + [acceptor])) from the spectra, and (3) a model to determine the degree of oligomerization (interaction stoichiometry) in the protein complexes from E vs. fD. The accuracy and applicability of this analysis was validated by theoretical simulations and experimental systems. The three steps were integrated into a computer procedure as an automatic quantitative FRET analysis which is easy, fast, and allows high-throughput quantification of FRET accurately and robustly, even in living cells. This method was subsequently applied to investigate oligomerization and protein-protein interactions, in particular in living cells. Cyan (CFP) and yellow fluorescent protein (YFP), two spectral variants of green fluorescent protein, were used as a donor-acceptor pair for in vivo measurements. Based on CFP and YFP fusions of non-interacting membrane proteins in the cell membrane, a minor FRET signal (E = 0.06 ± 0.01) can be regarded as an estimate of the direct interaction between CFP and YFP moieties of fusion proteins co-localized in the cell membrane (false positive). To confirm that an observed FRET signal is specific to the interaction of the investigated proteins, their FRET efficiency should therefore be clearly above E = 0.06. The oligomeric state of DcuS was examined both in vivo (CFP/YFP) and in vitro (two different donor-acceptor pairs of organic dyes) in three independent experimental systems. The consistent occurrence of FRET in vitro and in vivo provides, for the first time, evidence for the homo-dimerization of DcuS as a full-length protein. Moreover, novel interactions (hetero-complexes) between DcuS and its functionally related proteins, the citrate-specific sensor kinase CitA and the aerobic dicarboxylate transporter DctA, have been identified for the first time by intermolecular FRET in vivo. This analysis can be widely applied as a robust method to determine the interaction stoichiometry of protein complexes for other proteins of interest labelled with adequate fluorophores in vitro or in vivo.
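For reference, the textbook relations behind such an analysis; the exact E-vs-fD model used in the thesis is not reproduced here, and the binomial expression below is a common simplification for an N-mer with random donor/acceptor labelling:
$$ E = 1 - \frac{F_{DA}}{F_D} = \frac{R_0^6}{R_0^6 + r^6}, \qquad f_D = \frac{[D]}{[D]+[A]}, \qquad E_{\rm app}(f_D) \approx E_{\max}\,\big(1 - f_D^{\,N-1}\big), $$
where F_DA and F_D are donor intensities with and without acceptor, R_0 is the Förster radius, and fitting the measured apparent efficiency as a function of f_D distinguishes dimers (N = 2) from higher-order oligomers.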
Abstract:
The light-harvesting complex II (LHCII) of higher plants is one of the most abundant membrane proteins in the world. It binds 14 chlorophylls and 4 carotenoids non-covalently and functions in vivo as the light-harvesting antenna of photosystem II. Optimal light absorption is also crucial for solar cells, and it is natural to apply the same principle there, for which the use of biological components such as LHCII suggests itself. LHCII has been optimized by evolution for efficient absorption and transfer of solar energy, and it can be reconstituted in vitro in recombinant form. Any use of LHCII in technological applications requires interaction with other, preferably synthetic, components. Therefore, the binding and the energy transfer between LHCII and organic fluorescent dyes as well as inorganic quantum dots (QDs) were investigated.

With donor dyes, the "green gap" of LHCII was functionally closed. To this end, up to four fluorescent dyes were covalently bound to LHCII, either via maleimides at cysteines or via N-hydroxysuccinimidyl esters at lysines. The assembly, structure, and function of the pigment-protein complex were not disturbed by the fluorescent dyes.

In the search for a dye that, as an acceptor, takes up the energy collected by LHCII and converts it into electrical energy by releasing electrons, three rylene dyes were examined: one quaterrylene and two terrylenes. LHCII could be successfully labelled with all of these dyes; however, problems arose for the use of the hybrid complexes. The quaterrylene impaired the reconstitution of the protein because of its hydrophobicity, while for both terrylenes the energy transfer was inefficient.

In addition to the standard couplings between dyes and proteins, "native chemical ligation" was established in this work. For this purpose an LHCII mutant with an N-terminal cysteine was produced, labelled, and reconstituted. Measurements on this hybrid complex indicated an energy transfer between dye and protein.

In the long term, type II QDs capable of charge separation are to be used in hybrid complexes, with LHCII serving as the light-harvesting antenna. Until such QDs can be used, fundamental questions of the interaction between the two materials were investigated with type I QDs and energy transfer towards LHCII. It was found that QDs aggregate quickly in aqueous solution and that appropriate controls are therefore important. Furthermore, by separating unbound from QD-bound LHCII, the binding of LHCII to QDs could be confirmed, with differences in binding efficiency depending on the LHCII and QDs used. The binding could be optimized by producing fusion proteins of LHCII and affinity peptides. An energy transfer from QDs to LHCII could not be demonstrated conclusively, since in the hybrid complexes the QD (donor) fluorescence was quenched but the LHCII (acceptor) fluorescence was not correspondingly enhanced.

In summary, several hybrid complexes were produced in this work that can be used in further approaches. The insights gained here into the interactions between LHCII and synthetic materials provide a basis on which future work can build.
Abstract:
Nuclear masses are an important quantity for the study of nuclear structure, since they reflect the sum of all nucleonic interactions. Many experimental possibilities exist to measure masses precisely, among which the Penning trap is the tool that reaches the highest precision. Moreover, absolute mass measurements can be performed using carbon, the atomic-mass standard, as a reference. The new double-Penning-trap mass spectrometer TRIGA-TRAP has been installed and commissioned within this thesis work; it is the very first experimental setup of this kind located at a nuclear reactor. New technical developments have been carried out, such as a reliable non-resonant laser ablation ion source for the production of carbon cluster ions, and are still continuing, such as a non-destructive ion detection technique for single-ion measurements. Neutron-rich fission products, which are important for nuclear astrophysics and especially the r-process, will be made available by the reactor. Prior to the on-line coupling to the reactor, TRIGA-TRAP has already performed off-line mass measurements on stable and long-lived isotopes and will continue this program. The main focus within this thesis was on certain rare-earth nuclides in the well-established region of deformation around N~90. Another field of interest is mass measurements on actinoids, to test mass models and to provide direct links to the mass standard. Within this thesis, the mass of 241-Am could be measured directly for the first time.
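For orientation, the standard Penning-trap relations behind such measurements (generic form; electron binding energies are neglected and the thesis' detection technique is not specified here): the cyclotron frequency of an ion in the trap's magnetic field B gives its mass, and referencing it to carbon cluster ions ties the result to the atomic-mass standard,
$$ \nu_c = \frac{q B}{2\pi m}, \qquad \frac{m}{m_{\rm ref}} = \frac{\nu_{c,\rm ref}}{\nu_c}, \qquad m_{\rm atom} \simeq \frac{\nu_{c,\rm ref}}{\nu_c}\,\big(m_{\rm ref} - m_e\big) + m_e \quad \text{(singly charged ions)}, $$
with m_ref = n · 12 u for a C_n reference cluster, so that nuclides across the whole mass range can be linked directly to the mass standard.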
Abstract:
The formation of a market price for an asset can be understood as the superposition of the individual actions of the market participants, which cumulatively create supply and demand. This is comparable to the emergence of macroscopic properties in statistical physics, brought about by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, among them, besides the scaling behaviour, non-trivial correlation functions and temporally clustered volatility. The focus of the present work is on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioural patterns of financial market participants manifest themselves on short time scales, i.e. that the reaction to a given price history is not purely random; rather, similar price histories provoke similar reactions. Starting from the study of complex correlations in financial time series, the question is addressed of which properties change at the transition from a positive to a negative trend. An empirical quantification by means of rescaling yields the result that, independent of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time these properties are also independent of the market analysed: trends that persist only for seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapse, since trends on short time scales occur much more frequently. In addition, a Monte Carlo based simulation of the financial market is analysed and extended in order to reproduce the empirical properties and to gain insight into their origins, which are to be sought partly in the market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive procedures, a substantial reduction in computing time is achieved by parallelization on a graphics-card architecture. To demonstrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant run-time gains. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
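For reference, the standard model and update rule referred to in the last part of this abstract (textbook form; details of the GPU implementation in the thesis are not reproduced): the Ising energy and the Metropolis acceptance probability for a proposed single-spin flip with energy change ΔE are
$$ H = -J \sum_{\langle i,j\rangle} s_i s_j, \qquad s_i = \pm 1, \qquad p_{\rm acc} = \min\!\big(1,\; e^{-\Delta E / k_B T}\big). $$
On a GPU, lattice sites are typically updated in a checkerboard pattern so that spins updated in parallel are never nearest neighbours of each other.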
Abstract:
The beta decay of free neutrons is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables. Some of these observables, for example the correlation coefficients of the involved particles, are sensitive to physics beyond the SM. The spectrometer aSPECT was designed to measure precisely the shape of the proton energy spectrum and to extract from it the electron-antineutrino angular correlation coefficient "a". A first test period (2005/2006) provided a proof of principle, but the limiting influence of uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient "a" (publication: Baessler et al., 2008, Europhys. Journ. A, 38, p. 17-26). A second measurement cycle (2007/2008) aimed to go below the relative accuracy of da/a = 5% achieved by previous experiments (Stratowa et al. (1978), Byrne et al. (2002)). I performed the analysis of the data taken there, which is the emphasis of this doctoral thesis. A central point is the study of background: the systematic impact of background on "a" was reduced to da/a(syst.) = 0.61%. The statistical accuracy of the analyzed measurements is da/a(stat.) = 1.4%. In addition, saturation effects of the detector electronics, which had been observed early on, were investigated; these turned out not to be correctable at a sufficient level. A practicable idea of how to avoid these saturation effects is discussed in the last chapter.
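For context, the textbook form of the unpolarized neutron decay distribution that defines the coefficient "a" (generic notation; aSPECT infers "a" from the shape of the integrated proton recoil spectrum rather than from this differential form directly):
$$ d\Gamma \;\propto\; F(E_e)\left[\,1 + a\,\frac{\vec p_e \cdot \vec p_\nu}{E_e E_\nu} + b\,\frac{m_e}{E_e}\,\right] dE_e\, d\Omega_e\, d\Omega_\nu, \qquad a_{\rm SM} = \frac{1-|\lambda|^2}{1+3|\lambda|^2}, \quad \lambda = \frac{g_A}{g_V}, $$
so that a precise value of "a" tests the SM value of λ, while scalar and tensor interactions beyond the SM would also enter through the Fierz term b.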
Abstract:
While the Standard Model of elementary particle physics constitutes a consistent, renormalizable quantum field theory of three of the four known interactions, the quantization of gravity remains an unsolved problem. In recent years, however, evidence has accumulated that metric gravity is asymptotically safe. This means that a quantum field theory can be constructed for this interaction as well, one that is renormalizable in a generalized sense which no longer refers explicitly to perturbation theory. Moreover, this approach, which is based on the Wilsonian renormalization group, predicts the correct microscopic action of the theory. Classically, metric gravity is equivalent, at the level of the vacuum field equations, to Einstein-Cartan theory, which uses the vielbein and the spin connection as fundamental variables. This theory, however, has more degrees of freedom, a larger gauge group, and an underlying action of first order, all of which complicates a treatment analogous to that of metric gravity.

In this work, a three-dimensional truncation of the type of a generalized Hilbert-Palatini action is investigated, which captures, in addition to the running of Newton's constant and the cosmological constant, the renormalization of the Immirzi parameter. Despite the difficulties indicated above, it was possible to compute the spectrum of the free Hilbert-Palatini propagator analytically. On this basis, a flow equation of proper-time type is constructed. Suitable gauge conditions are chosen and analysed in detail; here the structure of the gauge group requires a covariantization of the gauge transformations. The resulting flow is studied for different regularization schemes and gauge parameters. This yields, also in the Einstein-Cartan approach, convincing evidence of asymptotic safety and hence of the possible existence of a mathematically consistent and predictive fundamental quantum theory of gravitation. In particular, a pair of non-Gaussian fixed points exhibiting anti-screening is found. At these fixed points Newton's constant and the cosmological constant are both relevant couplings, whereas the Immirzi parameter is irrelevant at one fixed point and relevant at the other. Furthermore, the beta function of the Immirzi parameter has a remarkably simple form. The results are robust under variations of the regularization scheme; however, future investigations should reduce the remaining gauge dependencies.
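For orientation, a generic proper-time flow equation of the type referred to in this abstract (schematic form; the truncation, regulator function f_k, and field content used in the thesis are not specified here):
$$ \partial_t \Gamma_k \;=\; -\frac{1}{2} \int_0^\infty \frac{ds}{s}\, \big(\partial_t f_k(s)\big)\, \mathrm{Tr}\; e^{-s\,\Gamma_k^{(2)}}, \qquad t = \ln k, $$
where Γ_k is the scale-dependent effective action, Γ_k^{(2)} its second functional derivative, and f_k(s) a regulator that suppresses contributions from momenta far below the scale k; asymptotic safety corresponds to the flow reaching a non-Gaussian fixed point with only finitely many relevant directions.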
Abstract:
In this thesis, a systematic analysis of the bar B to X_s gamma photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, m_b - 2E_gamma = O(Lambda_QCD), while the jet has a large energy but small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass m_b, the hadronic energy scale Lambda_QCD, and an intermediate scale sqrt{Lambda_QCD m_b} associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at O(1/m_b) whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty on the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the bar B to X_s gamma branching fraction, defined with a cut E_gamma > 1.6 GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5%. As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of -0.6% < A_CP^SM < 2.8% is possible for the asymmetry if resolved photon contributions are taken into account.
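For orientation, a schematic version of the kind of factorization formula described here (symbols are generic; the normalization, convolution variables, and complete set of resolved terms in the thesis are not reproduced):
$$ \frac{d\Gamma}{dE_\gamma} \;\sim\; H(\mu) \int d\omega\; J(\omega,\mu)\, S(\omega,\mu) \;+\; \frac{1}{m_b}\sum_i H_i(\mu)\,\big[\, J_i \otimes S_i \otimes \bar J_i \,\big](\mu) \;+\; \ldots, $$
where H encodes physics at the scale m_b, J the intermediate jet scale sqrt{Lambda_QCD m_b}, and the (subleading) shape functions S the scale Lambda_QCD; the additional jet functions \bar J appearing in the O(1/m_b) resolved terms describe the coupling of the photon to light partons.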
Electroweak precision observables and effective four-fermion interactions in warped extra dimensions
Abstract:
In this thesis, we study the phenomenology of selected observables in the context of the Randall-Sundrum scenario of a compactified warped extra dimension. Gauge and matter fields are assumed to live in the whole five-dimensional space-time, while the Higgs sector is localized on the infrared boundary. An effective four-dimensional description is obtained via Kaluza-Klein decomposition of the five-dimensional quantum fields. The symmetry-breaking effects due to the Higgs sector are treated exactly, and the decomposition of the theory is performed in a covariant way. We develop a formalism which allows for a straightforward generalization to scenarios with an extended gauge group compared to the Standard Model of elementary particle physics. As an application, we study the so-called custodial Randall-Sundrum model and compare the results to those of the original formulation. We present predictions for electroweak precision observables, the Higgs production cross section at the LHC, the forward-backward asymmetry in top-antitop production at the Tevatron, as well as the width difference, the CP-violating phase, and the semileptonic CP asymmetry in B_s decays.
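For orientation, the standard Randall-Sundrum background and the generic form of a Kaluza-Klein decomposition (textbook conventions, which may differ from those used in the thesis):
$$ ds^2 = e^{-2 k r_c |\phi|}\, \eta_{\mu\nu}\, dx^\mu dx^\nu - r_c^2\, d\phi^2, \qquad \phi \in [-\pi,\pi], \qquad \Phi(x,\phi) = \frac{1}{\sqrt{2\pi r_c}} \sum_n \phi^{(n)}(x)\, f_n(\phi), $$
where k is the AdS curvature scale, r_c the compactification radius, the infrared (TeV) boundary sits at |φ| = π, and the profiles f_n determine how strongly the Kaluza-Klein modes φ^{(n)} couple to the boundary-localized Higgs sector.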
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both at zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy at full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study of this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature.
The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in the scalar and pseudoscalar channels. For the temperature scan reported in this thesis, the breaking is still pronounced in the transition region, and the symmetry becomes effectively restored only above $1.16\,T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
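For reference, the textbook definitions behind the charge-radius extraction described in this abstract (conventions may differ slightly from those used in the thesis):
$$ \langle \pi^+(p') |\, j^{\rm em}_\mu \,| \pi^+(p) \rangle = (p+p')_\mu\, F_\pi(q^2), \qquad q = p'-p, \qquad F_\pi(q^2) = 1 + \tfrac{1}{6} \langle r_\pi^2 \rangle\, q^2 + \mathcal{O}(q^4), $$
so that measuring F_\pi at very small spacelike q^2 determines the slope at q^2 = 0 and hence \langle r_\pi^2 \rangle with little model dependence.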
Abstract:
This doctoral thesis focused on the investigation of enantiomeric and non-enantiomeric biogenic volatile organic compound (BVOC) emissions at both leaf and canopy scales in different environments. In addition, the anthropogenic compounds benzene, toluene, ethylbenzene, and xylenes (BTEX) were studied. BVOCs are emitted into the lower troposphere in large quantities (ca. 1150 Tg C yr^-1), approximately an order of magnitude greater than the anthropogenic VOCs. BVOCs are particularly important in tropospheric chemistry because of their impact on ozone production and secondary organic aerosol formation or growth. The BVOCs examined in this study were isoprene, (-)/(+)-α-pinene, (-)/(+)-β-pinene, Δ-3-carene, (-)/(+)-limonene, myrcene, eucalyptol, and camphor, as these were the most abundant BVOCs observed both in the leaf cuvette study and in the ambient measurements. In the laboratory cuvette studies, the sensitivity of the enantiomeric enrichment of the leaf emissions was examined as a function of light (0-1600 PAR) and temperature (20-45°C). Three typical Mediterranean plant species (Quercus ilex L., Rosmarinus officinalis L., Pinus halepensis Mill.), with more than three individuals of each, were investigated using a dynamic enclosure cuvette. The terpenoid emission rates were found to be directly linked either to both light and temperature (e.g. Quercus ilex L.) or mainly to temperature (e.g. Rosmarinus officinalis L., Pinus halepensis Mill.). However, the enantiomeric signature showed no clear trend in response to either light or temperature; moreover, a large variation of enantiomeric enrichment was found during the experiments. This enantiomeric signature was also used to distinguish chemotypes beyond the normal achiral chemical composition method. The results of nineteen Quercus ilex L. individuals, screened under standard conditions (30°C and 1000 PAR), showed four different chemotypes, whereas the traditional classification showed only two. An enclosure branch cuvette set-up was applied in the natural boreal forest environment to four chemotypes of Scots pine (Pinus sylvestris) and one chemotype of Norway spruce (Picea abies), and the direct emissions were compared with ambient air measurements above the canopy during the HUMPPA-COPEC 2010 summer campaign. The chirality of α-pinene emitted by Scots pine was dominated by the (+)-enantiomer, while for Norway spruce the chirality was found to be opposite (i.e. enriched in the (-)-enantiomer), becoming increasingly enriched in the (-)-enantiomer with light. Field measurements over a Spanish stone pine forest were performed to examine the extent of seasonal changes in enantiomeric enrichment (DOMINO 2008). These showed clear differences in the chirality of the monoterpene emissions. In wintertime the monoterpene (-)-α-pinene was found to be in slight enantiomeric excess over (+)-α-pinene at night, but by day the measured ratio was closer to one, i.e. racemic. Samples taken the following summer in the same location showed much higher monoterpene mixing ratios and revealed a strong enantiomeric excess of (-)-α-pinene. This indicates a strong seasonal variation in the enantiomeric emission ratio which was not manifested in the day/night temperature cycles in wintertime. A clear diurnal cycle of enantiomeric enrichment in α-pinene was also found over a French oak forest and over the boreal forest.
However, while in the boreal forest the (-)-α-pinene enrichment increased around the time of maximum light and temperature, the French forest showed the opposite tendency, with (+)-α-pinene being favoured. For the two field campaigns (DOMINO 2008 and HUMPPA-COPEC 2010), the BTEX compounds were also investigated. For the DOMINO campaign, the mixing ratios of the xylene isomers (meta- and para-) and ethylbenzene, which are all well resolved on the β-cyclodextrin column, were exploited to estimate average OH radical exposures of VOCs from the Huelva industrial area. These were compared to empirical estimates of OH based on J(NO2) measured at the site, and the deficiencies of each estimation method are discussed. For the HUMPPA-COPEC campaign, benzene and toluene mixing ratios could clearly identify air masses influenced by the biomass burning pollution plume from Russia.
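For orientation, the standard hydrocarbon-clock relation behind such OH-exposure estimates (generic form; the compound pairs, rate constants, and assumed emission ratios used in the thesis are not reproduced here): for two co-emitted compounds A and B removed mainly by OH, with rate constants k_A > k_B,
$$ \ln\frac{[\mathrm{A}]}{[\mathrm{B}]} \;=\; \ln\frac{[\mathrm{A}]_0}{[\mathrm{B}]_0} \;-\; \big(k_{\mathrm{A}} - k_{\mathrm{B}}\big) \int_0^t [\mathrm{OH}]\, dt', $$
so the measured ratio of, e.g., m,p-xylene to ethylbenzene, together with an assumed initial emission ratio, yields the average OH exposure ∫[OH] dt experienced by the air mass.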