341 results for Isomorphic factorization
Abstract:
The present state of the theoretical predictions for hadronic heavy-quark production is not quite satisfactory. The full next-to-leading order (NLO) ${\cal O}(\alpha_s^3)$ corrections to the hadroproduction of heavy quarks have raised the leading order (LO) ${\cal O}(\alpha_s^2)$ estimates, but the NLO predictions are still slightly below the experimental numbers. Moreover, the theoretical NLO predictions suffer from the usual large uncertainty resulting from the freedom in the choice of the renormalization and factorization scales of perturbative QCD. In this light there are hopes that a next-to-next-to-leading order (NNLO) ${\cal O}(\alpha_s^4)$ calculation will bring the theoretical predictions even closer to the experimental data. Also, the dependence of the physical process on the factorization and renormalization scales is expected to be greatly reduced at NNLO. This would reduce the theoretical uncertainty and therefore make the comparison between theory and experiment much more significant. In this thesis I concentrate on the part of the NNLO corrections to hadronic heavy-quark production in which one-loop integrals contribute in the form of a loop-by-loop product. In the first part of the thesis I use dimensional regularization to calculate the ${\cal O}(\epsilon^2)$ expansion of scalar one-loop one-, two-, three- and four-point integrals. The Laurent series of the scalar integrals is needed as an input for the calculation of the one-loop matrix elements for the loop-by-loop contributions. Since each factor of the loop-by-loop product has negative powers of the dimensional regularization parameter $\epsilon$ up to ${\cal O}(\epsilon^{-2})$, the Laurent series of the scalar integrals has to be calculated up to ${\cal O}(\epsilon^2)$. The negative powers of $\epsilon$ are a consequence of ultraviolet and infrared/collinear (or mass) divergences. Among the scalar integrals, the four-point integrals are the most complicated. The ${\cal O}(\epsilon^2)$ expansion of the three- and four-point integrals contains, in general, classical polylogarithms up to ${\rm Li}_4$ and $L$-functions related to multiple polylogarithms of maximal weight and depth four. All results for the scalar integrals are also available in electronic form. In the second part of the thesis I discuss the properties of the classical polylogarithms. I present algorithms which allow one to reduce the number of polylogarithms in an expression. I derive identities for the $L$-functions which have been used intensively to reduce the length of the final results for the scalar integrals. I also discuss the properties of multiple polylogarithms and derive identities expressing the $L$-functions in terms of multiple polylogarithms. In the third part I investigate the numerical efficiency of the results for the scalar integrals and discuss the dependence of the evaluation time on the relative error. In the fourth part of the thesis I present the larger part of the ${\cal O}(\epsilon^2)$ results for the one-loop matrix elements in heavy-flavor hadroproduction containing the full spin information. The ${\cal O}(\epsilon^2)$ terms arise as a combination of the ${\cal O}(\epsilon^2)$ results for the scalar integrals, the spin algebra and the Passarino-Veltman decomposition. The one-loop matrix elements will be needed as input in the determination of the loop-by-loop part of the NNLO corrections to hadronic heavy-flavor production.
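For orientation, the Laurent structure referred to above has the generic form (an illustrative sketch, not a specific result of the thesis):

$$ I(\epsilon) \;=\; \frac{a_{-2}}{\epsilon^{2}} + \frac{a_{-1}}{\epsilon} + a_{0} + a_{1}\,\epsilon + a_{2}\,\epsilon^{2} + {\cal O}(\epsilon^{3}). $$

In a loop-by-loop product $I_1(\epsilon)\,I_2(\epsilon)$ the ${\cal O}(\epsilon^{-2})$ poles of one factor multiply the ${\cal O}(\epsilon^{2})$ terms of the other, so the finite part of the product requires the coefficients up to $a_2$; it is these higher coefficients that contain the polylogarithms up to ${\rm Li}_4$.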
Abstract:
In this work, the factorization method for detecting regions with discontinuously deviating material parameters is investigated. Via an abstract formulation we prove the range identity underlying the method for general real elliptic problems and deduce both already known and new applications of the method. For the specific problem of locating magnetic or perfectly electrically conducting objects by low-frequency electromagnetic radiation, we show the unique solvability of the direct problem for sufficiently small frequencies and the convergence of its solutions to those of the elliptic equations of magnetostatics. By applying our general result we obtain the unique reconstructibility of the sought objects from electromagnetic measurements and a numerical algorithm for locating the objects. Using a model problem, we investigate how inclusions described by parabolic differential equations can be reconstructed inside a region described by elliptic differential equations. We prove the unique solvability of the underlying parabolic-elliptic direct problem and, by an extension of the factorization method, obtain the unique reconstructibility of the inclusions as well as a numerical algorithm for the practical realization of the method.
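For orientation, the range identity underlying the factorization method is usually stated as follows (standard form, quoted here for context; the abstract version proved in the thesis may differ in detail). With measurements $\Lambda$ and reference measurements $\Lambda_0$ for the unperturbed medium, the difference factorizes as

$$ \Lambda - \Lambda_0 = L\,F\,L^{*}, \qquad \mathcal{R}\bigl(|\Lambda - \Lambda_0|^{1/2}\bigr) = \mathcal{R}(L), $$

so that a sampling point $z$ lies inside an inclusion if and only if a known test function $\phi_z$ belongs to $\mathcal{R}(|\Lambda - \Lambda_0|^{1/2})$.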
Abstract:
This work is a reflection on the development of the notion of definition in the recent debate on analyticity. The revival of this discussion, after Quine's criticisms and the consequent initial abandonment of Carnap's conventionalist conception, resulted in a new epistemic conception of analyticity. In most cases the new epistemic theories, among them those of Bob Hale and Crispin Wright (Implicit Definition and the A priori, 2001) and Paul Boghossian (Analyticity, 1997; Epistemic analyticity, a defence, 2002; Blind reasoning, 2003; Is Meaning Normative?, 2005), share the common feature of understanding a priori knowledge in the form of an implicit definition (Paul Horwich, Stipulation, Meaning, and Apriority, 2001). But a second line of objections, raised first by Horwich and later by Hale and Wright themselves, highlights two difficulties for definition, corresponding to the issues of epistemic arrogance and of the acceptance (or stipulation) of an implicit definition. From this starting point several attempted answers arise: on the one hand, in the theory of Hale and Wright, a conception of definition on which it appears as an abstraction principle; on the other, a notion of definition as implicit definition that goes back to Boghossian's conception. In the latter, the implicit definition is given in the form of a linguistic conditional (EA, 2002; BR, 2003), obtained by a factorization of the theory built on the Carnapian model for the theoretical terms of empirical theories. A careful analysis of Rudolf Carnap's work (Philosophical Foundations of Physics, 1966) shows that the decomposition strategy represents a possible route towards a notion of analyticity adequate for theoretical terms. The Carnapian strategy is, in fact, part of an attempt to elaborate a notion of analyticity that takes into account the inductive aspects of empirical theories.
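The Carnapian factorization referred to above is standardly rendered via the Ramsey sentence (a textbook formulation, supplied here for orientation; it may not match the thesis's own notation). With $R(T)$ the Ramsey sentence of a theory $T$,

$$ T \;\dashv\vdash\; R(T) \,\wedge\, \bigl(R(T) \rightarrow T\bigr), $$

where $R(T)$ carries the empirical content and the Carnap conditional $R(T) \rightarrow T$, being empirically empty, plays the role of the implicit definition of the theoretical terms.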
Abstract:
In this work we study localized electric potentials that have an arbitrarily high energy on some given subset of a domain and low energy on another. We show that such potentials exist for general $L^\infty$-conductivities (with positive infima) in almost arbitrarily shaped subregions of a domain, as long as these regions are connected to the boundary and a unique continuation principle is satisfied. From this we deduce a simple, but new, theoretical identifiability result for the famous Calderón problem with partial data. We also show how to construct such potentials numerically and use a connection with the factorization method to derive a new non-iterative algorithm for the detection of inclusions in electrical impedance tomography.
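For illustration, the range test behind such a non-iterative algorithm is typically implemented along the following lines (a minimal sketch with hypothetical names and discretization, assuming a symmetric discretized measurement operator; the algorithm developed in this work may differ):

```python
import numpy as np

def factorization_indicator(M, test_vectors, reg=1e-14):
    """Picard-series indicator of the factorization method (sketch).

    M            : (n, n) symmetric discretized measurement operator,
                   e.g. the difference of Neumann-to-Dirichlet maps
                   with and without inclusions
    test_vectors : (n, m) array whose j-th column is the discretized
                   boundary test function g_z for sampling point z_j
    Returns m indicator values; large values suggest that the sampling
    point lies inside an inclusion.
    """
    lam, V = np.linalg.eigh(M)       # spectral decomposition M = V diag(lam) V^T
    coeff = V.T @ test_vectors       # expansion coefficients <v_k, g_z>
    # Picard criterion: g_z lies in the range of |M|^{1/2} iff the series
    # sum_k coeff_k^2 / |lam_k| converges; numerically the truncated sum
    # stays moderate inside the inclusion and blows up outside.
    picard = (coeff**2 / np.maximum(np.abs(lam), reg)[:, None]).sum(axis=0)
    return 1.0 / picard
```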
Abstract:
For the detection of hidden objects by low-frequency electromagnetic imaging the Linear Sampling Method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.
Abstract:
Assuming that the heat capacity of a body is negligible outside certain inclusions, the heat equation degenerates to a parabolic-elliptic interface problem. In this work we aim to detect these interfaces from thermal measurements on the surface of the body. We deduce an equivalent variational formulation for the parabolic-elliptic problem and give a new proof of its unique solvability based on Lions's projection lemma. For the case that the heat conductivity is higher inside the inclusions, we develop an adaptation of the factorization method to this time-dependent problem. In particular, this shows that the locations of the interfaces are uniquely determined by boundary measurements. The method also yields a numerical algorithm to recover the inclusions and thus the interfaces. We demonstrate how measurement data can be simulated numerically by coupling a finite element method with a boundary element method, and finally we present some numerical results for the inverse problem.
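Schematically, the degenerate problem described above can be written as follows (notation supplied here for illustration only; the thesis may use different conventions). With inclusions $D \subset \Omega$, heat capacity $\rho c$ supported only on $D$, and heat conductivity $\kappa$,

$$ \rho c\,\chi_{D}\,\partial_t u \;-\; \nabla\cdot(\kappa\,\nabla u) \;=\; 0 \quad \text{in } \Omega\times(0,T), $$

so the equation is parabolic inside $D$ and elliptic, at each fixed time, in $\Omega\setminus\overline{D}$.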
Abstract:
We consider a simple (but fully three-dimensional) mathematical model for the electromagnetic exploration of buried, perfectly electrically conducting objects within the soil underground. Moving an electric device parallel to the ground at constant height in order to generate a magnetic field, we measure the induced magnetic field within the device, and factor the underlying mathematics into a product of three operations which correspond to the primary excitation, some kind of reflection on the surface of the buried object(s) and the corresponding secondary excitation, respectively. Using this factorization we are able to give a justification of the so-called sampling method from inverse scattering theory for this particular set-up.
Abstract:
In this work, new tools for atmospheric pollutant sampling and analysis were applied in order to go deeper into source apportionment. The project was developed mainly through the study of atmospheric emission sources in a suburban area influenced by a municipal solid waste incinerator (MSWI), a medium-sized coastal tourist town and a motorway. Two main research lines were followed. Concerning the first line, the potential of PM samplers coupled with a wind-select sensor was assessed. The results showed that they can be a valid support in source apportionment studies, although meteorological and territorial conditions can strongly affect the results. Moreover, new markers were investigated, with particular focus on biomass burning processes. OC proved to be a good indicator of biomass combustion, as did all the determined organic compounds. Among metals, lead and aluminium are well correlated with biomass combustion. Surprisingly, PM was not enriched in potassium during the bonfire event. The second research line consisted in the application of Positive Matrix Factorization (PMF), a new statistical tool for data analysis. This technique was applied to datasets with different time resolutions. PMF applied to atmospheric deposition fluxes identified six main sources affecting the area; the incinerator's relative contribution appeared to be negligible. PMF analysis was then applied to PM2.5 collected with samplers coupled with a wind-select sensor. The larger number of determined environmental indicators made it possible to obtain more detailed results on the sources affecting the area. Vehicular traffic emerged as the source of greatest concern for the study area; again, the incinerator's relative contribution appeared to be negligible. Finally, the application of PMF analysis to hourly aerosol data demonstrated that the higher the temporal resolution of the data, the closer the source profiles were to the real ones.
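As an illustration of the technique, a minimal uncertainty-weighted non-negative factorization in the spirit of PMF is sketched below (simple multiplicative updates; the EPA PMF program solves the same least-squares problem with a different, more robust algorithm, and all names here are illustrative):

```python
import numpy as np

def weighted_nmf(X, U, n_factors, n_iter=2000, eps=1e-12, seed=0):
    """Uncertainty-weighted non-negative factorization X ~ G @ F with
    G, F >= 0, minimizing Q = sum(((X - G @ F) / U)**2).

    X : (samples, species) concentration matrix
    U : (samples, species) measurement uncertainties (positive)
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_factors))   # factor contributions per sample
    F = rng.random((n_factors, m))   # factor (source) profiles
    W = 1.0 / U**2                   # least-squares weights
    for _ in range(n_iter):
        GF = G @ F
        G *= ((W * X) @ F.T) / ((W * GF) @ F.T + eps)  # update contributions
        GF = G @ F
        F *= (G.T @ (W * X)) / (G.T @ (W * GF) + eps)  # update profiles
    Q = np.sum(((X - G @ F) / U) ** 2)  # goodness-of-fit statistic
    return G, F, Q
```

For the deposition dataset, for instance, one might call `G, F, Q = weighted_nmf(X, U, n_factors=6)` to mirror the six identified sources.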
Abstract:
We study the numerical solution of the inverse scattering problem of reconstructing the shape, position and number of finitely many perfectly conducting objects from near-field measurements of time-harmonic electromagnetic waves with metal detectors. We assume that the objects are located entirely in the lower half-space of an unbounded two-layered background medium, with the upper half-space filled with air and the lower half-space with soil. We first review the physical foundations of electromagnetic waves, from which we derive a simplified mathematical model in which the electromagnetic field is measured directly. We then extend this model to the measurement of the electromagnetic field of transmitting coils by means of receiving coils. For the simplified model we develop, using the theory of the associated direct scattering problem, a non-iterative method based on the idea of the so-called factorization method. We then transfer this method to the extended model. We propose an implementation of the reconstruction method and demonstrate its applicability in a series of numerical experiments. Furthermore, we investigate several variants of the method aimed at improving the reconstructions and reducing the computation time.
Abstract:
A permutation is said to avoid a pattern if it does not contain any subsequence which is order-isomorphic to it. Donald Knuth, in the first volume of his celebrated book "The Art of Computer Programming", observed that the permutations that can be computed (or, equivalently, sorted) by some particular data structures can be characterized in terms of pattern avoidance. In more recent years the topic has been reopened several times, though often in terms of sortable permutations rather than computable ones. The idea of sorting permutations by using one of Knuth's devices suggests looking for a deterministic procedure that decides, in linear time, whether there exists a sequence of operations able to convert a given permutation into the identity. In this thesis we show that, for the stack and the restricted deques, there exists a unique way to implement such a procedure. Moreover, we use these sorting procedures to create new sorting algorithms, and we prove some unexpected commutation properties between these procedures and the base step of bubblesort. We also show that the permutations that can be sorted by a combination of the base steps of bubblesort and its dual can be expressed, once again, in terms of pattern avoidance. In the final chapter we give an alternative proof of some enumerative results, in particular for the classes of permutations that can be sorted by the two restricted deques. It is well known that the permutations that can be sorted through a restricted deque are counted by the Schröder numbers. In the thesis, we show how the deterministic sorting procedures yield a bijection between sortable permutations and Schröder paths.
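For illustration, the classical greedy stack-sorting procedure going back to Knuth is exactly this kind of deterministic, linear-time decision procedure; the sketch below is a textbook version (not the thesis's own code), and the restricted-deque procedures studied in the thesis are more involved:

```python
def stack_sortable(perm):
    """Greedy one-pass stack sort (Knuth). Returns True iff `perm`
    (a permutation of 1..n) can be sorted by a single stack, which
    is equivalent to avoiding the pattern 231. The greedy rule is:
    pop whenever the stack top is the next value the sorted output
    expects, otherwise push the next input element.
    """
    stack, need = [], 1            # `need`: next value the output expects
    for x in perm:
        stack.append(x)
        while stack and stack[-1] == need:
            stack.pop()
            need += 1
    return not stack               # empty stack <=> output was 1..n in order

assert stack_sortable([2, 1, 3])
assert not stack_sortable([2, 3, 1])   # contains the pattern 231
```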
Abstract:
If the generic fibre $f^{-1}(c)$ of a Lagrangian fibration $f : X \to B$ on a complex Poisson variety $X$ is smooth, compact, and connected, it is isomorphic to the compactification of a complex abelian Lie group. For affine Lagrangian fibres it is not clear what the structure of the fibre is. Adler and van Moerbeke developed a strategy to prove that the generic fibre of a Lagrangian fibration is isomorphic to the affine part of an abelian variety. We extend their strategy to verify that the generic fibre of a given Lagrangian fibration is the affine part of a $(\mathbb{C}^*)^r$-extension of an abelian variety. This strategy turned out to be successful for all the examples we studied. Additionally, we studied examples of Lagrangian fibrations whose generic fibre is the affine part of a ramified cyclic cover of an abelian variety. We obtained an embedding into a Lagrangian fibration whose generic fibre is the affine part of a $\mathbb{C}^*$-extension of an abelian variety. This embedding is not an embedding in the category of Lagrangian fibrations. The $\mathbb{C}^*$-quotient of the new Lagrangian fibration defines in a natural way a deformation of the cyclic quotient of the original Lagrangian fibration.
Abstract:
In this thesis we investigate the phenomenology of supersymmetric particles at hadron colliders beyond next-to-leading order (NLO) in perturbation theory. We discuss the foundations of Soft-Collinear Effective Theory (SCET) and, in particular, we explicitly construct the SCET Lagrangian for QCD. As an example, we discuss factorization and resummation for the Drell-Yan process in SCET. We use techniques from SCET to improve existing calculations of the production cross sections for slepton-pair production and top-squark-pair production at hadron colliders. As a first application, we implement soft-gluon resummation at next-to-next-to-next-to-leading logarithmic order (NNNLL) for slepton-pair production in the minimal supersymmetric extension of the Standard Model (MSSM). This approach resums large logarithmic corrections arising from the dynamical enhancement of the partonic threshold region caused by steeply falling parton luminosities. We evaluate the resummed invariant-mass distribution and total cross section for slepton-pair production at the Tevatron and LHC and we match these results, in the threshold region, onto NLO fixed-order calculations. As a second application we present the most precise predictions available for top-squark-pair production total cross sections at the LHC. These results are based on approximate NNLO formulas in fixed-order perturbation theory, which completely determine the coefficients multiplying the singular plus distributions. The analysis of the threshold region is carried out in pair invariant mass (PIM) kinematics and in single-particle inclusive (1PI) kinematics. We then match our results in the threshold region onto the exact fixed-order NLO results and perform a detailed numerical analysis of the total cross section.
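For orientation, the threshold factorization used for such resummations has the schematic SCET form (standard structure, quoted here for context; the conventions and kinematics in the thesis may differ):

$$ \frac{d\sigma}{dM^{2}} \;\propto\; \sum_{q} \int_{\tau}^{1} \frac{dz}{z}\; H(M,\mu)\; S\!\bigl(\sqrt{\hat{s}}\,(1-z),\mu\bigr)\; \mathcal{L}_{q\bar{q}}\!\left(\frac{\tau}{z},\mu\right), \qquad \tau = \frac{M^{2}}{s}, $$

where $H$ is the hard function, $S$ the soft function and $\mathcal{L}_{q\bar{q}}$ the parton luminosity; the large threshold logarithms are resummed by solving the renormalization-group equations for $H$ and $S$.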
Abstract:
The thesis investigates the structure of the nucleon as probed by the electromagnetic interaction. Among the most basic observables reflecting the electromagnetic structure of the nucleon are the form factors, which have been studied by means of elastic electron-proton scattering with ever increasing precision for several decades. In the timelike region, corresponding to proton-antiproton annihilation into an electron-positron pair, the present experimental information is much less accurate; however, high-precision form factor measurements are planned for the near future. About 50 years after the first pioneering measurements of the electromagnetic form factors, polarization experiments stirred up the field, since their results were found to be in striking contradiction to the findings of previous form factor investigations based on unpolarized measurements. Triggered by the conflicting results, a whole new field emerged studying the influence of two-photon exchange corrections to elastic electron-proton scattering, which appeared to be the most likely explanation of the discrepancy. The main part of this thesis deals with theoretical studies of two-photon exchange, investigated particularly with regard to form factor measurements in the spacelike as well as the timelike region. An extraction of the two-photon amplitudes in the spacelike region through a combined analysis of unpolarized cross section measurements and polarization experiments is presented. Furthermore, predictions of the two-photon exchange effects on the $e^+p/e^-p$ cross section ratio are given for several new experiments which are currently ongoing. The two-photon exchange corrections are also investigated in the timelike region in the process $\bar{p}p \to e^+e^-$ by means of two factorization approaches. These corrections are found to be smaller than those obtained for the spacelike scattering process. The influence of the two-photon exchange corrections on cross section measurements, as well as on asymmetries which allow direct access to the two-photon exchange contribution, is discussed. Furthermore, one of the factorization approaches is applied to investigate the two-boson exchange effects in parity-violating electron-proton scattering. In the last part of this work, the process $\bar{p}p \to \pi^0 e^+e^-$ is analyzed with the aim of determining the form factors in the so-called unphysical timelike region below the two-nucleon production threshold. For this purpose, a phenomenological model is used which provides a good description of the available data for the real photoproduction process $\bar{p}p \to \pi^0\gamma$.
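For context, the discrepancy mentioned above is usually discussed in terms of the Rosenbluth (one-photon-exchange) form of the reduced cross section for elastic electron-proton scattering (a standard formula, quoted here for orientation):

$$ \sigma_{R} \;=\; G_{M}^{2}(Q^{2}) + \frac{\varepsilon}{\tau}\,G_{E}^{2}(Q^{2}), \qquad \tau = \frac{Q^{2}}{4M^{2}}, $$

where $\varepsilon$ is the virtual-photon polarization. Rosenbluth separation extracts $G_E$ and $G_M$ from the linear $\varepsilon$-dependence, while polarization transfer measures the ratio $G_E/G_M$ directly; two-photon exchange adds a further $\varepsilon$-dependent term and can therefore reconcile the two methods.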
Abstract:
This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power; however, the computational effort, which in general scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that the Hamiltonian and the density matrix are sparse due to localization, which reduces the computational effort so that it scales linearly with the system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane exposed to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulation, methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations give no indication of the formation of diamond and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also addressed. This results in a new formula for symmetric positive definite matrices which generalizes the Newton-Schulz iteration, Altman's formula for bounded, non-singular operators, and Newton's method for computing zeros of functions. It is proved that the order of convergence is always at least quadratic and that adaptively adjusting a parameter q leads to better results in all cases.
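For illustration, the classic Newton-Schulz iteration for the inverse square root (the p = 2 special case of the inverse p-th-root problem) is sketched below; this is the well-known textbook iteration that the thesis generalizes, not the thesis's new formula:

```python
import numpy as np

def newton_schulz_invsqrt(A, n_iter=20):
    """Newton-Schulz iteration for A^{-1/2}, A symmetric positive definite.

    Iterates X_{k+1} = 0.5 * X_k (3 I - A X_k^2), which converges
    quadratically to A^{-1/2} whenever ||I - A X_0^2|| < 1.
    """
    I = np.eye(A.shape[0])
    X = I / np.sqrt(np.linalg.norm(A, 2))    # scaling guarantees convergence for SPD A
    for _ in range(n_iter):
        X = 0.5 * X @ (3.0 * I - A @ X @ X)  # only matrix products, no inversion
    return X

# quick check on a random SPD matrix
rng = np.random.default_rng(0)
B = rng.random((5, 5))
A = B @ B.T + 5.0 * np.eye(5)
X = newton_schulz_invsqrt(A)
assert np.allclose(X @ A @ X, np.eye(5), atol=1e-8)
```

Because the update uses only matrix-matrix products, it preserves sparsity, which is what makes iterations of this type attractive in the linear-scaling setting described above.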
Abstract:
This thesis deals with the forward and inverse theory of transient eddy current problems. Transient excitation currents induce electromagnetic fields, which generate so-called eddy currents in conductive objects. In the case of slowly varying fields, this interaction can be described by the eddy current equation, an approximation to Maxwell's equations. It is a linear partial differential equation with non-smooth coefficient functions of mixed parabolic-elliptic type. The forward problem consists in determining the electric field as a distributional solution of the equation, given the excitation and the coefficient functions describing the surroundings. Conversely, the fields can be measured with measurement coils. The goal of the inverse problem is to extract, from these measurements, information about the conductive objects, that is, about the coefficient function describing them. In this work, a variational solution theory is presented and the well-posedness of the equation is discussed. Building on this, the behavior of the solution for vanishing conductivity is studied, and the linearizability of the equation without a conductive object in the direction of the appearance of a conductive object is shown. To regularize the equation, modifications are proposed which yield a fully parabolic or a fully elliptic problem, respectively; these are verified by proving the convergence of the corresponding solutions. Finally, it is shown that, under the assumption of an otherwise homogeneous background, conductive objects can be uniquely located from the measurements; for this purpose, the linear sampling method and the factorization method are applied.
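For orientation, the eddy current approximation described above is commonly written in the $E$-based form (standard formulation; conventions in the thesis may differ). With conductivity $\sigma$ (vanishing outside the conductors), permeability $\mu$ and excitation current $J$,

$$ \sigma\,\partial_t E + \nabla\times\bigl(\mu^{-1}\,\nabla\times E\bigr) = -\,\partial_t J, $$

which is parabolic where $\sigma > 0$ and degenerates to an elliptic equation where $\sigma = 0$; this is the mixed parabolic-elliptic character referred to above.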