15 results for Two-point boundary value problems
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
In various imaging problems the task is to use the Cauchy data of the solutions to an elliptic boundary value problem to reconstruct the coefficients of the corresponding partial differential equation. Often the examined object has known background properties but is contaminated by inhomogeneities that cause perturbations of the coefficient functions. The factorization method of Kirsch provides a tool for locating such inclusions. In this paper, the factorization technique is studied in the framework of coercive elliptic partial differential equations of the divergence type: Earlier it has been demonstrated that the factorization algorithm can reconstruct the support of a strictly positive (or negative) definite perturbation of the leading order coefficient, or if that remains unperturbed, the support of a strictly positive (or negative) perturbation of the zeroth order coefficient. In this work we show that these two types of inhomogeneities can, in fact, be located simultaneously. Unlike in the earlier articles on the factorization method, our inclusions may have disconnected complements and we also weaken some other a priori assumptions of the method. Our theoretical findings are complemented by two-dimensional numerical experiments that are presented in the framework of the diffusion approximation of optical tomography.
Abstract:
The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We were able to show that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections only depend on one external momentum. We show the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also find that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
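To indicate the kind of representation involved, the basic identity underlying such Mellin-Barnes representations is the standard splitting formula (quoted here for orientation, not taken from the thesis)
$$ \frac{1}{(A+B)^{\lambda}} \;=\; \frac{1}{\Gamma(\lambda)}\,\frac{1}{2\pi i}\int_{-i\infty}^{+i\infty}\mathrm{d}z\;\Gamma(\lambda+z)\,\Gamma(-z)\,\frac{B^{z}}{A^{\lambda+z}}\,, $$
with the contour separating the poles of $\Gamma(\lambda+z)$ from those of $\Gamma(-z)$. Applying it to the Feynman-parametric denominators of the two-loop two-point function yields a double Mellin-Barnes integral; closing the contours and summing the residues then produces the nested sums that the nestedsums library is designed to handle.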
Abstract:
This dissertation addresses two different topics. First, within the priority programme "Kolloidverfahrenstechnik" (colloid process engineering) and in cooperation with the group of Prof. Dr. Heike Schuchmann at KIT in Karlsruhe, the encapsulation of silica nanoparticles in a PMMA shell by miniemulsion polymerization was developed and the scale-up of the process using high-pressure homogenizers was advanced. Second, various fluorinated nanoparticles were generated by the miniemulsion process and their behaviour in cells was investigated. Silica particles were successfully encapsulated by miniemulsion polymerization in two different processes. In the first method, modified silica particles were dispersed in an MMA monomer phase and silica-loaded droplets were then generated by the standard miniemulsion process; these droplets could be polymerized into composite particles. In the encapsulation via the fission/fusion process, the hydrophobized silica particles were introduced by fission and fusion events into pre-existing monomer droplets, which were subsequently polymerized. In order to disperse hydrophilic silica in a hydrophobic monomer, the silica particles first had to be modified. This was done, among other ways, by chemically grafting 3-methacryloxypropyltrimethoxysilane onto the surface of the silica particles. In addition, the hydrophilic silica particles were physically modified by adsorption of CTMA-Cl. By varying, among other things, the encapsulation process, the amount of silica, the type and amount of surfactant, and the comonomers, composite particles with different morphologies, sizes and degrees of filling were obtained. Fluorinated nanoparticles were successfully synthesized via miniemulsion polymerization. Fluorinated acrylates, fluorinated methacrylates and fluorinated styrene served as monomers, and fluorinated nanoparticles could be prepared from each of these three groups of monomers. For more detailed investigations, 2,3,4,5,6-pentafluorostyrene, 3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,10-heptadecafluorodecyl methacrylate and 1H,1H,2H,2H-perfluorodecyl acrylate were selected as monomers. Perfluoromethyldecalin was used as the hydrophobe to suppress Ostwald ripening. The most stable miniemulsions were again obtained with the ionic surfactant SDS; with increasing amounts of SDS dissolved in the continuous phase, a decrease in particle size was observed. In addition to the homopolymer particles, copolymer particles with acrylic acid were also successfully synthesized. Furthermore, the behaviour of the fluorinated particles in cells was examined: the fluorinated particles showed no toxic behaviour. The adsorption of proteins from human serum was investigated by ITC measurements. It was thus shown that miniemulsion polymerization is a versatile and effective technique for generating hybrid nanoparticles with different morphologies as well as surface-functionalized nanoparticles.
Abstract:
Over many years, arguments have repeatedly been put forward that ascribe a more fundamental role to discrete spaces than to continuous ones. Our approach to the discrete world is guided by recent developments in noncommutative geometry (NCG). For roughly 15 years there have been efforts, and also progress, towards a better understanding of physics with the help of noncommutative geometry. Just one of many possibilities is the reformulation of the Standard Model of elementary particle physics; among other things, the Higgs mechanism can be described geometrically. In NCG the Higgs field is described as a connection on a two-element set. Several goals are achieved in this thesis: the quantization of a zero-dimensional "space-time"; a consistent discretization for models in the noncommutative framework; Yang-Mills theories on a point with a deformed Higgs potential; the extension to a "true" two-point space-time; the counting of Feynman graphs in a zero-dimensional theory; and Feynman rules. Particular attention is devoted to notions that originate in quantum field theory. In this setting such concepts, namely gauge fixing, ghost contributions, the Slavnov-Taylor identity and renormalization, can be discussed free of the complications that divergences or other technical difficulties might otherwise cause. An iterative, computer-algebra-assisted procedure for solving the Dyson-Schwinger equation which takes the renormalization procedure into account is also presented.
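For orientation, the standard toy example of graph counting in zero dimensions (an illustration, not a formula quoted from the thesis) replaces the path integral by an ordinary integral,
$$ Z(g)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathrm{d}\varphi\; e^{-\varphi^{2}/2-\frac{g}{4!}\varphi^{4}} =\sum_{n\ge 0}\frac{1}{n!}\left(-\frac{g}{4!}\right)^{\!n}(4n-1)!!\,, $$
where the coefficient of $g^{n}$ enumerates the vacuum Feynman graphs with $n$ four-valent vertices, each weighted by the inverse of its symmetry factor.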
Abstract:
Escherichia coli can use C4-dicarboxylates and other carboxylic acids as substrates for aerobic and anaerobic metabolism. The presence of C4-dicarboxylates in the external medium is sensed by the two-component system DcuSR, consisting of the membrane-bound sensor kinase DcuS and the cytoplasmic response regulator DcuR. Binding of C4-dicarboxylates to the periplasmic domain of DcuS leads to induction of the target genes. These include the genes for the anaerobic fumarate/succinate antiporter DcuB (dcuB), the anaerobic fumarase (fumB) and fumarate reductase (frdABCD). Under aerobic conditions, DcuSR stimulates expression of the dctA gene, which encodes the aerobic C4-dicarboxylate carrier DctA. The carrier DcuB was shown to have a regulatory function in the expression of the DcuSR-regulated genes: inactivation of the dcuB gene led to maximal expression of a dcuB´-´lacZ reporter gene fusion and of other DcuSR-dependent genes even in the absence of fumarate. This stimulation occurred only in a dcuS-positive background. DcuB thus differs from the alternative carriers DcuA and DcuC, which did not show this effect. Using undirected mutagenesis, DcuB point mutants (Thr394Ile and Asp398Asn) were generated that caused gene induction but retained an intact transport function, showing that the regulatory effect of DcuB is independent of its transport function. The role of one point mutation (Thr394) was characterized in more detail by site-directed mutagenesis. Two models for the membrane topology of DcuB and the location of the point mutations in the protein are presented. Since DcuB could exert its regulatory function via an interaction with DcuS, possible interactions between DcuB and DcuS as well as DcuR were investigated using two-hybrid systems. For biochemical studies of DcuB, expression of the protein in vivo and in vitro was also attempted. Under aerobic conditions the C4-dicarboxylate carrier DctA influences the expression of the DcuSR-dependent genes: a mutation of the dctA gene resulted in stronger expression of a dctA´-´lacZ reporter gene fusion compared with the wild type. This expression decreased in a dcuS-negative background, while the succinate-dependent induction was retained. Under anaerobic conditions the dctA gene can also be induced by inactivation of DcuB. A model is presented that explains the participation of both carriers in the DcuSR-dependent regulation.
Abstract:
This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following: between spikes the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X exceeds a certain excitation threshold S, a spike is generated, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S still have to be determined in order to specify the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. This thesis discusses four different cases, in which we assume that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between successive spikes, which we regard as iid hitting times of the threshold S by X started in x_0. The first two cases are very similar, and in each of them the maximum likelihood estimator can be given explicitly; moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and the true Laplace transform with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed. In the last chapter the efficiency of the minimum-distance estimators is examined on simulated data, and applications to real data sets and their results are discussed in detail.
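As a minimal illustration of the data-generating model described above (an illustrative Python sketch with arbitrary parameter values and hypothetical names, not the estimation procedure of the thesis), inter-spike intervals can be simulated as first hitting times of a threshold S by an Ornstein-Uhlenbeck process started at the reset value x_0:

```python
import numpy as np

def simulate_isi(n_spikes, x0=0.0, S=1.0, theta=1.0, mu=1.2, sigma=0.3,
                 dt=1e-3, seed=0):
    """Euler-Maruyama simulation of the spiking model sketched above: between
    spikes the potential follows dX_t = theta*(mu - X_t) dt + sigma dB_t,
    started at x0; a spike occurs when X first reaches the threshold S.
    Returns the simulated inter-spike intervals (hitting times)."""
    rng = np.random.default_rng(seed)
    intervals = []
    for _ in range(n_spikes):
        x, t = x0, 0.0
        while x < S:
            x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        intervals.append(t)
    return np.array(intervals)

isi = simulate_isi(200)
print(f"mean ISI = {isi.mean():.3f}, sd = {isi.std():.3f}")
```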
Abstract:
This thesis studies two physical flow experiments on nonwoven fabrics, which are intended to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measured data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with free boundary for the degenerate parabolic Richards equation in its saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and minimize it with iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as the existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction method, to a linear degenerate parabolic boundary value problem. We describe the numerical realization and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
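Schematically, and in generic notation chosen here rather than quoted from the thesis, the output least-squares approach minimizes $J(q)=\tfrac12\|F(q)-s^{\delta}\|^{2}$ over the B-spline coefficients $q$, where $F$ maps a diffusivity coefficient to the predicted free boundary and $s^{\delta}$ is the measured (noisy) free boundary; a Levenberg-Marquardt step then reads
$$ q_{k+1}=q_{k}+\bigl(F'(q_{k})^{*}F'(q_{k})+\mu_{k} I\bigr)^{-1}F'(q_{k})^{*}\bigl(s^{\delta}-F(q_{k})\bigr), $$
with the damping parameter $\mu_{k}>0$ playing the role of the regularization.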
Abstract:
Monte Carlo simulations are used to study the effect of confinement on a crystal of point particles interacting with an inverse power law potential in d=2 dimensions. This system can describe colloidal particles at the air-water interface, a model system for the experimental study of two-dimensional melting. It is shown that the state of the system (a strip of width D) depends very sensitively on the precise boundary conditions at the two "walls" providing the confinement. If one uses a corrugated boundary commensurate with the order of the bulk triangular crystalline structure, both orientational order and positional order are enhanced, and such surface-induced order persists near the boundaries also at temperatures where the system in the bulk is in its fluid state. However, using smooth repulsive boundaries as walls providing the confinement, only the orientational order is enhanced, but positional (quasi-) long range order is destroyed: the mean-square displacement of two particles n lattice parameters apart in the y-direction along the walls then crosses over from a logarithmic increase (characteristic for d=2) to a linear increase (characteristic for d=1). The strip then exhibits a vanishing shear modulus. These results are interpreted in terms of a phenomenological harmonic theory. Also the effect of incommensurability of the strip width D with the triangular lattice structure is discussed, and a comparison with surface effects on phase transitions in simple Ising and XY models is made.
Abstract:
The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continues to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches of EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
Abstract:
In electrical impedance tomography, one tries to recover the conductivity inside a physical body from boundary measurements of current and voltage. In many practically important situations, the investigated object has known background conductivity but it is contaminated by inhomogeneities. The factorization method of Andreas Kirsch provides a tool for locating such inclusions. Earlier, it has been shown that under suitable regularity conditions positive (or negative) inhomogeneities can be characterized by the factorization technique if the conductivity or one of its higher normal derivatives jumps on the boundaries of the inclusions. In this work, we use a monotonicity argument to generalize these results: We show that the factorization method provides a characterization of an open inclusion (modulo its boundary) if each point inside the inhomogeneity has an open neighbourhood where the perturbation of the conductivity is strictly positive (or negative) definite. In particular, we do not assume any regularity of the inclusion boundary or set any conditions on the behaviour of the perturbed conductivity at the inclusion boundary. Our theoretical findings are verified by two-dimensional numerical experiments.
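Schematically, and in its standard electrical impedance tomography form rather than in the precise setting of this work, the factorization method characterizes the inclusion D through a range test:
$$ z\in D \iff \Phi_{z}\big|_{\partial\Omega}\in \mathcal{R}\bigl(|\Lambda_{\sigma}-\Lambda_{1}|^{1/2}\bigr), $$
where $\Lambda_{\sigma}$ and $\Lambda_{1}$ denote the boundary measurement maps for the perturbed and the known background conductivity, respectively, and $\Phi_{z}$ is the background potential of a dipole placed at the test point z; in practice the range condition is evaluated via a Picard-type criterion on the eigensystem of the measured operator.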
Abstract:
Liquids under the influence of external fields exhibit a wide range of intriguing phenomena that can be markedly different from the behaviour of a quiescent system. This work considers two different systems — a glassforming Yukawa system and a colloid-polymer mixture — by Molecular Dynamics (MD) computer simulations coupled to dissipative particle dynamics. The former consists of a 50-50 binary mixture of differently-sized, like-charged colloids interacting via a screened Coulomb (Yukawa) potential. Near the glass transition the influence of an external shear field is studied. In particular, the transition from elastic response to plastic flow is of interest. At first, this model is characterised in equilibrium. Upon decreasing temperature it exhibits the typical dynamics of glassforming liquids, i.e. the structural relaxation time τα grows strongly in a rather small temperature range. This is discussed with respect to the mode-coupling theory of the glass transition (MCT). For the simulation of bulk systems under shear, Lees-Edwards boundary conditions are applied. At constant shear rates γ˙ ≫ 1/τα the relevant time scale is given by 1/γ˙ and the system shows shear thinning behaviour. In order to understand the pronounced differences between a quiescent system and a system under shear, the response to a suddenly commencing or terminating shear flow is studied. After the switch-on of the shear field the shear stress shows an overshoot, marking the transition from elastic to plastic deformation, which is connected to a super-diffusive increase of the mean squared displacement. Since the average static structure only depends on the value of the shear stress, it does not discriminate between those two regimes. The distribution of local stresses, in contrast, becomes broader as soon as the system starts flowing. After a switch-off of the shear field, these additional fluctuations are responsible for the fast decay of stresses, which occurs on a time scale 1/γ˙ . The stress decay after a switch-off in the elastic regime, on the other hand, happens on the much larger time scale of structural relaxation τα. While stresses decrease to zero after a switch-off for temperatures above the glass transition, they decay to a finite value for lower temperatures. The obtained results are important for advancing new theoretical approaches in the framework of mode-coupling theory. Furthermore, they suggest new experimental investigations on colloidal systems. The colloid-polymer mixture is studied in the context of the behaviour near the critical point of phase separation. For the MD simulations a new effective model with soft interaction potentials is introduced and its phase diagram is presented. Here, mainly the equilibrium properties of this model are characterised. While the self-diffusion constants of colloids and polymers do not change strongly when the critical point is approached, critical slowing down of interdiffusion is observed. The order parameter fluctuations can be determined through the long-wavelength limit of static structure factors. For this strongly asymmetric mixture it is shown how the relevant structure factor can be extracted by a diagonalisation of a matrix that contains the partial static structure factors. By presenting first results of this model under shear it is demonstrated that it is suitable for non-equilibrium simulations as well.
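To make the sheared-bulk setup concrete, the following is a minimal sketch (illustrative Python with hypothetical function and parameter names, not code from the thesis) of how Lees-Edwards periodic images are typically applied for planar Couette flow:

```python
import numpy as np

def lees_edwards_wrap(pos, vel, box, shear_rate, t):
    """Apply Lees-Edwards (sliding-brick) periodic boundary conditions for
    planar Couette flow: flow in x, gradient in y, periodic box (Lx, Ly, Lz).
    pos and vel are (N, 3) arrays; t is the elapsed simulation time."""
    Lx, Ly, Lz = box
    offset = shear_rate * t * Ly          # x-displacement of the image boxes
    pos, vel = pos.copy(), vel.copy()
    cross = np.floor(pos[:, 1] / Ly)      # number of y-boundaries crossed
    pos[:, 0] -= cross * offset           # shift x by the image displacement
    vel[:, 0] -= cross * shear_rate * Ly  # adjust the streaming velocity
    pos[:, 1] -= cross * Ly               # wrap y
    pos[:, 0] -= np.floor(pos[:, 0] / Lx) * Lx   # ordinary wrapping in x
    pos[:, 2] -= np.floor(pos[:, 2] / Lz) * Lz   # and in z
    return pos, vel
```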
Abstract:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces, on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, thus resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two laterally shifted (in flow direction) observation volumes, created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different size, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to experimentally measure the shear rate and slip length of water flowing on hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves a quantitative data analysis is needed. This is not a straightforward task due to the complexity of the problem, which makes it impossible to derive analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results I also describe a new numerical method of data analysis of the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data. I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method to retrieve the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed in order to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PC machines and massively parallel computers; the latter allow performing the data analysis within short computing times. I applied this method to study the flow of an aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, on a hydrophilic surface slip is not expected, while on a hydrophobic surface some slippage may exist. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower, and within the limitations of the experiments and the model, indistinguishable from zero.
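To illustrate the central data-analysis quantity (a minimal numpy sketch with hypothetical names, not the Brownian Dynamics fitting procedure described above), the cross-correlation of the two intensity traces can be estimated as follows; the lag at which G(tau) peaks estimates the tracer transit time between the two observation volumes, and dividing the lateral pinhole separation by this lag gives a velocity scale:

```python
import numpy as np

def cross_correlation(i1, i2, dt, max_lag):
    """Estimate the normalised fluorescence cross-correlation
        G(tau) = <dI1(t) dI2(t + tau)> / (<I1> <I2>)
    for lags tau = 0, dt, ..., max_lag*dt, where dI = I - <I>.
    i1 and i2 are the intensity traces of the upstream and downstream
    observation volumes, sampled every dt."""
    d1, d2 = i1 - i1.mean(), i2 - i2.mean()
    n = len(i1)
    lags = np.arange(max_lag + 1)
    g = np.array([np.mean(d1[:n - k] * d2[k:]) for k in lags])
    return lags * dt, g / (i1.mean() * i2.mean())
```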
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings such as rounding errors or underflow, therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management due to the potentially catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns "unknown". The advantage of our method is that it is reasonably fast and rarely answers "unknown". It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most of the cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
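As an illustration of the role exact arithmetic plays here (a minimal Python sketch that assumes a floating-point solver has already produced a candidate point, not the interior-point construction developed in the thesis), feasibility of a rational rounding of that candidate can be certified exactly:

```python
from fractions import Fraction

def certify_feasible(A, b, candidate, max_den=10**12):
    """Exact-arithmetic certificate that a candidate point satisfies A x <= b.
    The floating-point candidate (e.g. from an LP solver, nudged into the
    relative interior) is rounded to rationals and every constraint is then
    checked with fractions.Fraction, so no rounding error can occur."""
    x = [Fraction(float(v)).limit_denominator(max_den) for v in candidate]
    for row, rhs in zip(A, b):
        lhs = sum(Fraction(a) * xi for a, xi in zip(row, x))
        if lhs > Fraction(rhs):
            return False   # some constraint is violated exactly: no certificate
    return True            # the point is provably feasible

# toy system: x + y <= 1, x >= 0, y >= 0, with a strictly interior candidate
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [1.0, 0.0, 0.0]
print(certify_feasible(A, b, [0.25, 0.25]))   # True
```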
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is the control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.
The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.
The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
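For context on the form factor analysis above, the pion charge radius is defined through the slope of the form factor at vanishing momentum transfer (standard definition, not a formula quoted from the thesis),
$$ F_\pi(q^2)=1+\tfrac{1}{6}\left\langle r_\pi^{2}\right\rangle q^{2}+\mathcal{O}(q^{4}), \qquad \left\langle r_\pi^{2}\right\rangle = 6\,\frac{\mathrm{d}F_\pi}{\mathrm{d}q^{2}}\bigg|_{q^{2}=0}, $$
which is why form factor data very close to $q^2=0$ allow a model-independent extraction.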
Abstract:
One of the fundamental interactions in the Standard Model of particle physics is the strong force, which can be formulated as a non-abelian gauge theory called Quantum Chromodynamics (QCD). In the low-energy regime, where the QCD coupling becomes strong and quarks and gluons are confined to hadrons, a perturbative expansion in the coupling constant is not possible. However, the introduction of a four-dimensional Euclidean space-time lattice allows for an \textit{ab initio} treatment of QCD and provides a powerful tool to study the low-energy dynamics of hadrons. Some hadronic matrix elements of interest receive contributions from diagrams including quark-disconnected loops, i.e. disconnected quark lines from one lattice point back to the same point. The calculation of such quark loops is computationally very demanding, because it requires knowledge of the all-to-all propagator. In this thesis we use stochastic sources and a hopping parameter expansion to estimate such propagators. We apply this technique to study two problems which rely crucially on the calculation of quark-disconnected diagrams, namely the scalar form factor of the pion and the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon. The scalar form factor of the pion describes the coupling of a charged pion to a scalar particle. We calculate the connected and the disconnected contribution to the scalar form factor for three different momentum transfers. The scalar radius of the pion is extracted from the momentum dependence of the form factor. The use of several different pion masses and lattice spacings allows for an extrapolation to the physical point. The chiral extrapolation is done using chiral perturbation theory ($\chi$PT). We find that our pion mass dependence of the scalar radius is consistent with $\chi$PT at next-to-leading order. Additionally, we are able to extract the low-energy constant $\ell_4$ from the extrapolation, and our result is in agreement with results from other lattice determinations. Furthermore, our result for the scalar pion radius at the physical point is consistent with a value that was extracted from $\pi\pi$-scattering data. The hadronic vacuum polarization (HVP) is the leading-order hadronic contribution to the anomalous magnetic moment $a_\mu$ of the muon. The HVP can be estimated from the correlation of two vector currents in the time-momentum representation. We explicitly calculate the corresponding disconnected contribution to the vector correlator. We find that the disconnected contribution is consistent with zero within its statistical errors. This result can be converted into an upper limit for the maximum contribution of the disconnected diagram to $a_\mu$ by using the expected time-dependence of the correlator and comparing it to the corresponding connected contribution. We find the disconnected contribution to be smaller than $\approx 5\%$ of the connected one. This value can be used as an estimate for a systematic error that arises from neglecting the disconnected contribution.
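Schematically, and in generic notation not taken from the thesis, stochastic estimation of quark-disconnected loops replaces the exact trace over the all-to-all propagator by an average over noise sources $\eta_s$ with unit covariance,
$$ \operatorname{Tr}\bigl(\Gamma\, M^{-1}\bigr)\;\approx\;\frac{1}{N_{s}}\sum_{s=1}^{N_{s}}\eta_{s}^{\dagger}\,\Gamma\,M^{-1}\eta_{s}, \qquad \frac{1}{N_{s}}\sum_{s=1}^{N_{s}}\eta_{s}^{\phantom{\dagger}}\eta_{s}^{\dagger}\;\xrightarrow[N_{s}\to\infty]{}\;\mathbb{1}, $$
where $M$ is the lattice Dirac operator and $\Gamma$ the relevant spin structure, so that one inversion $M x = \eta_s$ per source suffices; a hopping parameter expansion is typically combined with such noise estimators to reduce their variance.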