19 results for EFFECTIVE-MASS THEORY
Abstract:
Within this work, a particle-polymer surface system is studied with respect to particle-surface interactions. The latter are governed by micromechanics and are an important aspect of a wide range of industrial applications. Here, a new methodology for understanding the adhesion process and measuring the relevant forces is developed, based on the quartz crystal microbalance (QCM). The potential of the QCM technique for studying particle-surface interactions and reflecting the adhesion process is evaluated by carrying out experiments with a custom-made setup, consisting of the QCM with a 160 nm thick polystyrene (PS) film spin-coated onto the quartz and of glass particles of different diameters (5-20 µm) deposited onto the polymer surface. Shifts in the QCM resonance frequency are monitored as a function of the oscillation amplitude. The induced frequency shifts of the 3rd overtone are found to decrease or increase, depending on the particle-surface coupling type and the applied oscillation (frequency and amplitude). For strong coupling the 3rd harmonic decreases, corresponding to an “added mass” on the quartz surface. However, positive frequency shifts are observed in some cases and are attributed to weak coupling between particle and surface. Higher overtones, i.e. the 5th and 7th, were utilized in order to derive additional information about the interactions taking place. For small particles, the shift for specific overtones can increase after annealing, while for large particle diameters annealing causes a negative frequency shift. The lower overtones correspond to a generally strong-coupling regime with mainly negative frequency shifts observed, while the 7th appears to be sensitive to contact break-down, with positive recorded shifts. During oscillation, the motion of the particles and the induced frequency shift of the QCM are governed by a balance between inertial forces and contact forces.
The adherence of the particles can be increased by annealing the PS film at 150°C, which leads to the formation of a PS meniscus. For the interpretation, the Hertz, Johnson-Kendall-Roberts, and Derjaguin-Müller-Toporov theories and the Mindlin theory of partial slip are considered. When partial slip is induced by an oscillating load, a part of the contact ruptures, which decreases the effective contact stiffness. Additionally, there are long-term memory effects due to consolidation which, together with the QCM vibrations, increase the coupling. However, the vibrations can also break the contact, leading to detachment and even to surface damage and deformation due to inertia. For strong coupling the particles move with the vibrations and simply act as an added effective mass, decreasing the resonance frequency in agreement with the Sauerbrey equation that is commonly used to calculate the added mass on a QCM. When the system enters the weak-coupling regime, the particles are no longer able to follow the fast movement of the QCM surface. Hence, they effectively act as an additional “spring” with its own coupling constant and increase the resonance frequency. The frequency shift, however, is not a unique function of the coupling constant. Furthermore, the critical oscillation amplitude is determined, above which particles detach. No movement is detected at much lower amplitudes, while for intermediate values lateral particle displacement is observed. In order to validate the QCM results and study the particle effects on the surface, atomic force microscopy (AFM) is additionally utilized to image surfaces and measure surface forces.
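The strong-coupling limit invoked above can be made quantitative with the Sauerbrey relation. The sketch below uses standard AT-cut quartz constants; the 5 MHz fundamental and the mass load are illustrative assumptions, not values from the thesis:

```python
import math

# Sauerbrey relation: a rigidly coupled ("strong-coupling") mass load
# lowers the resonance frequency in proportion to the added mass per area.
RHO_Q = 2648.0    # quartz density, kg/m^3
MU_Q = 2.947e10   # quartz shear modulus, Pa

def sauerbrey_shift(f0_hz, mass_per_area_kg_m2, overtone=1):
    """Frequency shift (Hz) for a rigidly attached areal mass density."""
    return -2.0 * overtone * f0_hz**2 * mass_per_area_kg_m2 / math.sqrt(RHO_Q * MU_Q)

f0 = 5e6               # assumed fundamental resonance, Hz
dm = 1e-6              # assumed added mass per area, kg/m^2 (= 100 ng/cm^2)
for n in (3, 5, 7):    # overtones monitored in the experiments
    print(f"n={n}: df = {sauerbrey_shift(f0, dm, n):.1f} Hz")
```

A positive (weak-coupling) shift lies outside this rigid-load picture, which is exactly why the abstract treats it as a separate regime.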
By studying the surface of the polymer film after excitation and particle removal, AFM imaging helped in detecting three different meniscus types for the contact area: “full contact”, “asymmetrical”, and a third type including a “homocentric smaller meniscus”. The different meniscus forms result in varying bond strength between particles and polymer film, which could explain the deviation between the number of particles per surface area measured by imaging and the values provided by the QCM frequency-shift analysis. The asymmetric and homocentric contact types are suggested to be responsible for the positive frequency shifts observed for all three measured overtones, i.e. for the weak-coupling regime, while the “full contact” type results in a negative frequency shift by effectively contributing to the mass increase of the quartz. The interplay between inertia and contact forces for the particle-surface system leads to strong or weak coupling, with the particles affecting the polymer surface in the three ways mentioned. This is manifested in the frequency shifts of the QCM harmonics, which are used to differentiate between the two interaction types and reflect the overall state of adhesion for particles of different size.
Abstract:
The first part of this dissertation is concerned with the investigation of perturbative unitarity in the complex-mass renormalization scheme (CMS). To this end, a method for calculating the imaginary parts of one-loop integrals with complex mass parameters is presented, which reduces to the conventional Cutkosky rules in the limit of stable particles. Using a model Lagrangian for the interaction of a heavy vector boson with a light fermion, it is demonstrated that applying the CMS preserves the unitarity of the underlying S-matrix in the perturbative sense, provided the renormalized coupling constant is chosen to be real. The second part of the thesis deals with various applications of the CMS in chiral effective field theory (EFT). Specifically, the mass and width of the Delta resonance, the elastic electromagnetic form factors of the Roper resonance, the electromagnetic form factors of the nucleon-to-Roper transition, as well as pion-nucleon scattering and photo- and electroproduction of pions at center-of-mass energies in the region of the Roper resonance are calculated. The choice of suitable renormalization conditions allows the construction of a consistent chiral power counting for EFT in the presence of various resonant degrees of freedom, so that the listed processes can be studied in terms of a systematic expansion in small parameters. The results obtained here can be used for extrapolations of corresponding lattice QCD simulations to the physical value of the pion mass. Therefore, in addition to the dependence of the form factors on the squared momentum transfer, the pion-mass dependence of the magnetic moment and of the electromagnetic radii of the Roper resonance is also investigated.
In the context of pion-nucleon scattering and photo- and electroproduction of pions, a partial-wave analysis and a multipole decomposition are performed, with the P11 partial wave and the multipoles M1- and S1- fitted to empirical data by means of nonlinear regression.
Abstract:
This thesis deals with the modeling of low-energy electromagnetic and hadronic processes within a manifestly Lorentz-invariant chiral effective field theory that explicitly includes resonant, i.e. vector-meson, degrees of freedom as dynamical fields. This effective theory can therefore be understood as an approximation of the underlying quantum chromodynamics at low energies. Particular attention is paid to the power-counting and renormalization schemes employed, which allow a consistent description of mesonic processes up to energies of about 1 GeV. The power counting essentially relies on a large-N_c argument (N_c being the number of color degrees of freedom) and permits an equivalent treatment of Goldstone bosons (pions) and resonances (rho and omega mesons). As the renormalization scheme, the complex-mass scheme, which is particularly well suited to particles that are unstable with respect to the strong interaction, is employed as an extension of the extended on-mass-shell scheme; in combination with the BPHZ renormalization procedure (named after Bogoliubov, Parasiuk, Hepp, and Zimmermann), it provides a powerful framework for calculating quantum corrections in this chiral effective field theory. All calculations include terms of chiral order four as well as one-loop Feynman diagrams. Among the processes considered are the vector form factor of the pion in the timelike region, real Compton scattering (respectively photon fusion) in the neutral and charged channels, and virtual Compton scattering embedded in electron-positron annihilation. Finally, a series of experimental data sets for various observables is used to extract the low-energy coupling constants of the theory.
The methods and procedures developed here, and in particular their technical implementation, are of a very general nature and can therefore also be adapted to other problems in this field of low-energy quantum chromodynamics.
Abstract:
A nanostructured thin film is a thin material layer, usually supported by a (solid) substrate, which possesses subdomains with characteristic nanoscale dimensions (10-100 nm) that are differentiated by their material properties. Such films have attracted vast research interest because the dimensions and morphology of the nanostructure introduce new possibilities for manipulating chemical and physical properties not found in bulk materials. Block copolymer (BCP) self-assembly, and anodization to form nanoporous anodic aluminium oxide (AAO), are two different methods for generating nanostructures by self-organization. Using poly(styrene-block-methyl methacrylate) (PS-b-PMMA) nanopatterned thin films, it is demonstrated that these polymer nanopatterns can be used to study the influence of nanoscale features on protein-surface interactions. Moreover, a method for the directed assembly of adsorbed protein nanoarrays, based on the nanoscale juxtaposition of the BCP surface domains, is also demonstrated. Studies on protein-nanopattern interactions may inform the design of biomaterials, biosensors, and relevant cell-surface experiments that make use of nanoscale structures. In addition, PS-b-PMMA and AAO thin films are also demonstrated for use as optical waveguides at visible wavelengths. Due to the sub-wavelength nature of the nanostructures, scattering losses are minimized, and the optical response is amenable to analysis with effective medium theory (EMT). Optical waveguide measurements and EMT analysis of the films’ optical anisotropy enabled the in situ characterization of the PS-b-PMMA nanostructure, and of a variety of surface processes within the nanoporous AAO involving (bio)macromolecules, at high sensitivity.
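The EMT analysis mentioned above can be illustrated with a minimal mixing-rule sketch. For a film with parallel cylindrical nanopores much smaller than the wavelength (as in AAO), the film acts as a uniaxial effective medium; the 2D Maxwell Garnett rule and the example values below (alumina-like matrix with n ≈ 1.65, air pores, 30% porosity) are assumptions for illustration, not parameters from the thesis:

```python
# Uniaxial effective-medium sketch for a sub-wavelength nanoporous film.
def emt_uniaxial(eps_matrix, eps_pore, f):
    """Return (eps_parallel, eps_perpendicular) for pore volume fraction f."""
    # field parallel to the pore axis: simple volume average
    eps_par = f * eps_pore + (1.0 - f) * eps_matrix
    # field perpendicular to the pores: 2D Maxwell Garnett (depolarization 1/2)
    beta = (eps_pore - eps_matrix) / (eps_pore + eps_matrix)
    eps_perp = eps_matrix * (1.0 + f * beta) / (1.0 - f * beta)
    return eps_par, eps_perp

eps_par, eps_perp = emt_uniaxial(1.65**2, 1.0, 0.30)
print(f"n_par  = {eps_par**0.5:.3f}")   # index for E-field along the pores
print(f"n_perp = {eps_perp**0.5:.3f}")  # index for E-field across the pores
```

The anisotropy n_par > n_perp is exactly the kind of signal a waveguide measurement can invert for porosity or adsorbed-layer changes.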
Abstract:
In archaeology, electrical resistivity measurements are routinely used for the prospection of sites. The method is inexpensive, easy to apply, and in most cases yields reliable and easily interpretable results. Nevertheless, in some cases the method can image archaeological structures only partially or not at all, if the physical and chemical properties of the soil and of the archaeological structures do not permit it. The electrical resistivity is influenced by parameters such as water content, soil structure, coarse fragments (soil skeleton), soil texture, salinity, and soil temperature. Some of these parameters, such as water content and soil temperature, are subject to seasonal variation. The present work investigates the electrical resistivity of archaeological stone structures and evaluates the possibility of representing archaeological structures and soils as numerical models on the basis of field measurements and laboratory analyses. To this end, a combination of different pedological, geoarchaeological, and geophysical methods was used. In order to represent archaeological structures and soil profiles as numerical resistivity models, information on the geometry of the structures and on their electrical resistivity values is required. The quality of this background information is decisive for the accuracy of the resistivity model. The geometry of the resistivity models is based on the results of percussion core drillings and archaeological excavations. The parameters involved in the formation of the electrical resistivity were measured by analyzing soil samples and, via pedotransfer functions such as the Rhoades formula, allow the estimation of the electrical resistivity of the fine soil.
To calculate the influence of the coarse fragments on the electrical resistivity of soil profiles and archaeological structures, percolation theory and effective medium theory were employed. The accuracy and possible limitations of these methods were tested in the laboratory by experimental resistivity measurements on undisturbed soil samples and synthetic materials. The seasonal variation of the soil water content was simulated with numerical models using the software HYDRUS. The hydraulic models were constructed on the basis of the established pedological and archaeological stratigraphy and use data from local weather stations as input parameters. By combining the HYDRUS results with the pedotransfer functions, the influence of this seasonal variation on the prospection results of electrical resistivity methods could be calculated. The results of the modeling processes were compared with the field measurements. The best agreement between model results and prospection results was found for the case study at Katzenbach, for which the models were built on the basis of archaeological excavation results and detailed pedological analyses. Further case studies show that electrical resistivity models can be used to estimate the influence of unfavorable prospection conditions on the results of electrical resistivity measurements. This information supports the planning and application of the methods in the field and enables a more effective interpretation of the prospection results. The modeling approaches presented require further verification by comparing the modeling results with detailed geophysical field monitoring of archaeological sites.
In addition, electrical resistivity measurements on artificial wall structures under controlled conditions could be used to validate the modeling processes.
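A minimal sketch of the pedotransfer step mentioned above, in the spirit of the Rhoades model, which relates the bulk conductivity of the fine soil to the pore-water conductivity and the volumetric water content via a transmission factor; the coefficients used here are illustrative assumptions, not the values calibrated in this work:

```python
# Rhoades-type pedotransfer sketch:
#   sigma_bulk = sigma_w * theta * T(theta) + sigma_s,  with T = a*theta + b
# sigma_w: pore-water conductivity (S/m), theta: volumetric water content,
# sigma_s: surface-conductance term. All coefficients are assumed values.
def bulk_resistivity(sigma_w, theta, sigma_s=0.004, a=1.3, b=-0.06):
    """Bulk resistivity (ohm m) of the fine soil at water content theta."""
    transmission = a * theta + b            # tortuosity/transmission factor
    sigma_bulk = sigma_w * theta * transmission + sigma_s
    return 1.0 / sigma_bulk

# wetter soil -> lower resistivity, i.e. seasonally changing contrast
# between soil matrix and stone structures
for theta in (0.10, 0.20, 0.35):
    print(f"theta={theta:.2f}: rho = {bulk_resistivity(0.05, theta):.0f} ohm m")
```

Feeding HYDRUS-simulated water contents through such a function is what links the hydraulic models to the predicted resistivity surveys.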
Abstract:
The most important property controlling the physicochemical behaviour of polyelectrolytes and their applicability in different fields is the charge density on the macromolecular chain. A polyelectrolyte molecule in solution may have an effective charge density that is smaller than the actual charge density determined from its chemical structure. In the present work an attempt has been made to quantitatively determine this effective charge density of a model polyelectrolyte by using light scattering techniques. Flexible linear polyelectrolytes with a poly(2-vinylpyridine) (2-PVP) backbone are used in the present study. The polyelectrolytes are synthesized by quaternizing the pyridine groups of 2-PVP with ethyl bromide to different degrees of quaternization. The effects of molar mass, degree of quaternization and solvent polarity on the effective charge are studied. The results show that the effective charge does not vary much with the polymer molar mass or the degree of quaternization, but a significant increase in the effective charge is observed when the solvent polarity is increased. The results do not obey the counterion condensation theory proposed by Manning. Based on the very low effective charges determined in this study, a new mechanism for the counterion condensation phenomenon, arising from a specific polyelectrolyte-counterion interaction, is proposed.
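For comparison with the reported deviations, Manning's theory makes a simple quantitative prediction: for monovalent counterions the uncondensed charge fraction is min(1, b/l_B), with b the charge spacing along the chain and l_B the Bjerrum length. The sketch below evaluates this; the 0.25 nm spacing (roughly a fully quaternized vinyl backbone) is an illustrative assumption:

```python
import math

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KT = 1.380649e-23 * 298  # thermal energy at 298 K, J

def bjerrum_length(eps_r):
    """Bjerrum length (m) in a solvent of relative permittivity eps_r."""
    return E**2 / (4 * math.pi * EPS0 * eps_r * KT)

def manning_fraction(charge_spacing_m, eps_r):
    """Manning prediction for the fraction of uncondensed (effective) charges."""
    xi = bjerrum_length(eps_r) / charge_spacing_m   # Manning parameter
    return min(1.0, 1.0 / xi)

b = 0.25e-9  # assumed charge spacing at full quaternization
for eps_r, solvent in ((78.5, "water"), (32.7, "methanol")):
    f = manning_fraction(b, eps_r)
    print(f"{solvent}: l_B = {bjerrum_length(eps_r)*1e9:.2f} nm, f_eff = {f:.2f}")
```

The trend (more polar solvent, shorter l_B, larger effective charge) matches the observation above, while the thesis reports effective charges well below even this prediction.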
Abstract:
There is hardly a more precise description of nature than that provided by the Standard Model of elementary particles (SM). With few exceptions, it is able to describe the physics of the matter and gauge fields. Nevertheless, there is interest in a more comprehensive theory that, for example, also incorporates gravity, describes neutrino oscillations, and moreover resolves further open questions. To come a step closer to such a theory, this thesis employs an effective power-series ansatz to describe the physics of the Standard Model and of new phenomena. New physics is parameterized by means of a mass parameter and a set of new coupling constants. At lowest order one recovers the familiar SM; higher-order terms in the coupling constants describe effects beyond the SM. Certain symmetry requirements yield a definite number of effective operators of mass dimension six, which underlie the calculations presented here. We first calculate, for a specific selection of processes, the corresponding decay widths and cross sections in a model that extends the SM by a single new effective operator. Under the assumption that the additional contribution to an observable lies within the experimental uncertainty, we use available experimental results from leptonic and semileptonic precision measurements to derive exclusion limits on the new couplings as functions of the mass parameter. The results presented here enable physicists, on the one hand, to judge for which measured observables an increase in precision would be worthwhile in order to obtain better exclusion limits; on the other hand, they provide an indication of which processes are interesting with regard to discoveries of new physics.
Abstract:
This thesis is concerned with calculations in manifestly Lorentz-invariant baryon chiral perturbation theory beyond order D=4. We investigate two different methods. The first approach consists of the inclusion of additional particles besides pions and nucleons as explicit degrees of freedom. This results in the resummation of an infinite number of higher-order terms which contribute to higher-order low-energy constants in the standard formulation. In this thesis the nucleon axial, induced pseudoscalar, and pion-nucleon form factors are investigated. They are first calculated in the standard approach up to order D=4. Next, the inclusion of the axial-vector meson a_1(1260) is considered. We find three diagrams with an axial-vector meson which are relevant to the form factors. Due to the applied renormalization scheme, however, the contributions of the two loop diagrams vanish and only a tree diagram contributes explicitly. The appearing coupling constant is fitted to experimental data of the axial form factor. The inclusion of the axial-vector meson results in an improved description of the axial form factor for higher values of momentum transfer. The contributions to the induced pseudoscalar form factor, however, are negligible for the considered momentum transfer, and the axial-vector meson does not contribute to the pion-nucleon form factor. The second method consists in the explicit calculation of higher-order diagrams. This thesis describes the applied renormalization scheme and shows that all symmetries and the power counting are preserved. As an application we determine the nucleon mass up to order D=6 which includes the evaluation of two-loop diagrams. This is the first complete calculation in manifestly Lorentz-invariant baryon chiral perturbation theory at the two-loop level. The numerical contributions of the terms of order D=5 and D=6 are estimated, and we investigate their pion-mass dependence. 
Furthermore, the higher-order terms of the nucleon sigma term are determined with the help of the Feynman-Hellmann theorem.
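For orientation, the lower-order terms of the chiral expansion of the nucleon mass, whose extension to orders D=5 and D=6 is the subject of the thesis, together with the Feynman-Hellmann relation used for the sigma term, read in standard notation:

```latex
m_N = m_0 - 4 c_1 M_\pi^2 - \frac{3 g_A^2}{32\pi F_\pi^2}\, M_\pi^3
      + \mathcal{O}(M_\pi^4),
\qquad
\sigma_{\pi N} = M_\pi^2\, \frac{\partial m_N}{\partial M_\pi^2}.
```

Here m_0 is the nucleon mass in the chiral limit, c_1 a low-energy constant, g_A the axial coupling, and F_\pi the pion decay constant; the two-loop calculation adds the M_\pi^5 and M_\pi^6 terms to this series.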
Abstract:
This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electro-production and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of 2 spin-independent (scalar) and 4 spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams - 100 one-loop diagrams need to be calculated - several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulation of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present the results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
Abstract:
Five different methods were critically examined to characterize the pore structure of the silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which overestimated the mesopore size distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, which were developed specifically for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of the pore connectivity, a measure for the mass transfer kinetics in the mesopore network; c) mercury porosimetry using newly recommended mercury contact-angle values. The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the volume-based pore size distribution, but only the ISEC method with the implemented PPM and PNM models gave the number-averaged pore size and distribution as well as the pore connectivity values. The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment; it was found that the mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid permeability, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that governs the column back pressure; rather, the surface-area-to-volume ratio of the silica skeleton is most decisive. Thus the monolith with the lowest ratio will be the most permeable.
The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with the ones from imaging and image analysis. All named methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model. The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores due to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in the small-scaled silica monoliths with narrow distribution. The prediction of the optimum regimes of the pore structural parameters for the given target parameters in HPLC separations was performed. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-averaged size distribution of the mesopores is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume is strongly dependent on the value of the nominal pore diameter of the number-averaged pore size distribution. The mesopore size has to be adapted to the molecular size of the analyte, in particular for peptides and proteins. The study on the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeletons and the external porosity are decisive for the column efficiency. The latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived.
The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with increasing total porosity, though this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize the column efficiency, more homogeneous monoliths should be prepared.
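The permeability argument above can be illustrated with a Hagen-Poiseuille estimate, idealizing the flow-through pores as a bundle of parallel cylindrical channels; the column dimensions, porosity, and eluent viscosity below are illustrative assumptions, not measured values from this work:

```python
# Hagen-Poiseuille back-pressure sketch for a monolithic column whose
# flow-through pores are idealized as parallel cylindrical channels.
def column_back_pressure(flow_velocity, pore_diameter, length,
                         viscosity=1.0e-3, external_porosity=0.7):
    """Pressure drop (Pa) over the column for a superficial flow velocity."""
    # superficial (empty-column) velocity -> mean velocity inside the pores
    u_pore = flow_velocity / external_porosity
    return 32.0 * viscosity * length * u_pore / pore_diameter**2

dp = column_back_pressure(flow_velocity=1.0e-3,   # 1 mm/s, assumed
                          pore_diameter=2.0e-6,   # 2 um flow-through pores
                          length=0.10)            # 10 cm column
print(f"back pressure ~ {dp/1e5:.1f} bar")
```

This idealization reproduces the strong 1/d^2 sensitivity; the thesis's refinement replaces the single channel diameter by the skeleton's surface-area-to-volume ratio, which the data showed to be the decisive quantity.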
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they have been deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum usage of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. So, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments, in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To name four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and introduces all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
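The two schemas (1) and (2) can be caricatured in a few lines of code. In this toy sketch a simple counting threshold stands in, very loosely, for the "enough input" criterion; all names are hypothetical and this is not the ANALYZE-LEARN-REDUCE implementation:

```python
from collections import Counter

# G + L + C -> S is assumed to have happened already: parsed_structures is
# a list of (lexeme, feature) observations extracted from the corpus.
# G + L + S -> L': derive lexical entries from those observations.
def acquire(parsed_structures, threshold=3):
    """Adopt a (lexeme, feature) hypothesis only if seen often enough."""
    observations = Counter(parsed_structures)
    lexicon = {}
    for (lexeme, feature), count in observations.items():
        if count >= threshold:   # crude stand-in for 'enough' input
            lexicon.setdefault(lexeme, set()).add(feature)
    return lexicon

S = [("give", "ditransitive")] * 4 + [("give", "intransitive")] * 1 \
    + [("sleep", "intransitive")] * 5
print(acquire(S))  # the one-off misparse of "give" is not adopted
```

Revision of falsely acquired knowledge, the part the thesis insists on, would correspond to re-running such a step so that entries can also be retracted when later evidence contradicts them.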
Abstract:
In this thesis, a systematic analysis of the bar B to X_s gamma photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, m_b - 2E_gamma = O(Lambda_QCD), while the jet has a large energy but small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass m_b, the hadronic energy scale Lambda_QCD, and an intermediate scale sqrt{Lambda_QCD m_b} associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at O(1/m_b) whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty in the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the bar B to X_s gamma branching fraction, defined with a cut E_gamma > 1.6 GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5%.
As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of -0.6% < A_CP^SM < 2.8% is possible for the asymmetry once resolved photon contributions are taken into account.
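The scale separation described in this abstract can be sketched schematically. The following is a generic leading-power SCET factorization formula; the normalization and the precise arguments of the functions are illustrative, not taken from this work:

```latex
% Leading-power factorization of the photon spectrum (schematic),
% with p_+ = m_b - 2E_\gamma = O(\Lambda_{\rm QCD}) in the endpoint region:
\frac{d\Gamma}{dE_\gamma}
  \;\propto\;
  H(m_b,\mu)\int_0^{p_+}\! d\omega\;
  J\!\big(m_b(p_+-\omega),\mu\big)\,S(\omega,\mu)
% H: hard function          (scale  mu_h ~ m_b)
% J: jet function           (scale  mu_i ~ sqrt(m_b Lambda_QCD))
% S: soft (shape) function  (scale  mu_s ~ Lambda_QCD)
```

The resolved photon contributions of the thesis enter as additional terms of this type at O(1/m_b), with new jet functions convolved with subleading shape functions.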
Electroweak precision observables and effective four-fermion interactions in warped extra dimensions
Resumo:
In this thesis, we study the phenomenology of selected observables in the context of the Randall-Sundrum scenario of a compactified warped extra dimension. Gauge and matter fields are assumed to live in the whole five-dimensional space-time, while the Higgs sector is localized on the infrared boundary. An effective four-dimensional description is obtained via Kaluza-Klein decomposition of the five-dimensional quantum fields. The symmetry-breaking effects due to the Higgs sector are treated exactly, and the decomposition of the theory is performed in a covariant way. We develop a formalism which allows for a straightforward generalization to scenarios with a gauge group extended compared to that of the Standard Model of elementary particle physics. As an application, we study the so-called custodial Randall-Sundrum model and compare the results to those of the original formulation. We present predictions for electroweak precision observables, the Higgs production cross section at the LHC, the forward-backward asymmetry in top-antitop production at the Tevatron, as well as the width difference, the CP-violating phase, and the semileptonic CP asymmetry in B_s decays.
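The Kaluza-Klein decomposition mentioned above can be illustrated schematically. This is a generic sketch for a 5D bulk field; the profile functions and normalization are standard textbook assumptions, not taken from this thesis:

```latex
% KK decomposition of a 5D field on the orbifold coordinate \phi (schematic):
\Phi(x,\phi) \;=\; \frac{1}{\sqrt{2\pi r}}\,\sum_{n}\phi_n(x)\,f_n(\phi)
% phi_n(x): four-dimensional KK modes with masses m_n
% f_n(phi): bulk profiles, orthonormal with respect to the warped metric
```

Inserting this expansion into the 5D action and integrating over the extra dimension yields the effective four-dimensional theory referred to in the abstract, with one tower of KK modes per 5D field.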
Resumo:
In this thesis we investigate several phenomenologically important properties of top-quark pair production at hadron colliders. We calculate double differential cross sections in two different kinematical setups, pair invariant-mass (PIM) and single-particle inclusive (1PI) kinematics. In PIM kinematics we present results for the double differential cross section with respect to the invariant mass of the top-quark pair and the top-quark scattering angle. Working in the threshold region, where the pair invariant mass M is close to the partonic center-of-mass energy sqrt{hat{s}}, we are able to factorize the partonic cross section into contributions from different energy regions. We use renormalization-group (RG) methods to resum large threshold logarithms to next-to-next-to-leading-logarithmic (NNLL) accuracy. On a technical level this is done using effective field theories, such as heavy-quark effective theory (HQET) and soft-collinear effective theory (SCET). The same techniques are applied in 1PI kinematics, leading to a calculation of the double differential cross section with respect to the transverse momentum pT and the rapidity of the top quark. We restrict the phase space such that only soft emission of gluons is possible, and perform a NNLL resummation of threshold logarithms. The resulting analytical expressions enable us to precisely predict several observables, and a substantial part of this thesis is devoted to their detailed phenomenological analysis. Matching our results in the threshold regions to the exact ones at next-to-leading order (NLO) in fixed-order perturbation theory allows us to make predictions at NLO+NNLL order in RG-improved perturbation theory, and at approximate next-to-next-to-leading order (NNLO) in fixed-order perturbation theory. We give numerical results for the invariant-mass distribution of the top-quark pair, and for the top-quark transverse-momentum and rapidity spectra.
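In PIM kinematics, the soft-gluon factorization described above takes a schematic form of the following type. This is a generic sketch of threshold factorization in color space; the precise arguments, variables, and normalization are illustrative, not taken from this work:

```latex
% Partonic cross section near the PIM threshold z = M^2/\hat{s} \to 1 (schematic):
\frac{d^2\hat{\sigma}}{dM\,d\cos\theta}
  \;\propto\;
  \mathrm{Tr}\!\left[\,\mathbf{H}(M,\cos\theta,\mu)\;
  \mathbf{S}\!\big(\sqrt{\hat{s}}\,(1-z),\mu\big)\right]
% H: hard-function matrix in color space (natural scale  mu_h ~ M)
% S: soft-function matrix               (natural scale  mu_s ~ M(1-z))
% Solving the RG equations for H and S resums logarithms of (1-z),
% here to NNLL accuracy.
```

The 1PI analysis proceeds analogously, with the threshold variable and the soft function adapted to single-particle inclusive kinematics.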
We predict the total cross section, separately for both kinematics. Using these results, we analyze subleading contributions to the total cross section in 1PI and PIM kinematics originating from power corrections to the leading terms in the threshold expansions, and compare them to previous approaches. We then combine our PIM and 1PI results for the total cross section, thereby eliminating uncertainties due to these corrections. The combined predictions for the total cross section are presented as a function of the top-quark mass in the pole, the minimal-subtraction (MS), and the 1S mass schemes. In addition, we calculate the forward-backward (FB) asymmetry at the Tevatron in the laboratory and ttbar rest frames as a function of the rapidity and the invariant mass of the top-quark pair at NLO+NNLL. We also give binned results for the asymmetry as a function of the invariant mass and the rapidity difference of the ttbar pair, and compare them to recent measurements. As a last application, we calculate the charge asymmetry at the LHC as a function of a lower rapidity cut-off for the top and anti-top quarks.
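At its core, the forward-backward asymmetry discussed above is a counting observable in the rapidity difference Delta y = y_t - y_tbar. A minimal illustrative sketch (the function name and the list-of-events input are hypothetical, not from this work):

```python
def forward_backward_asymmetry(delta_y):
    """A_FB = (N(dy > 0) - N(dy < 0)) / (N(dy > 0) + N(dy < 0)),
    where dy = y_t - y_tbar for each ttbar event.
    Events with dy == 0 do not contribute to either count."""
    n_forward = sum(1 for dy in delta_y if dy > 0)
    n_backward = sum(1 for dy in delta_y if dy < 0)
    return (n_forward - n_backward) / (n_forward + n_backward)
```

In the actual analysis the asymmetry is built from (differential) cross sections rather than raw event counts, and is presented binned in the pair invariant mass or in |Delta y|, as described in the abstract.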
Resumo:
Primary varicella-zoster virus (VZV) infection during childhood leads to varicella, commonly known as chickenpox. After primary infection has occurred, VZV establishes latency in the host. Later in life, the virus can cause reactivated infection, clinically known as herpes zoster or shingles. In immunodeficient patients, dissemination of the virus can lead to life-threatening disease. Withdrawal of acyclovir prophylaxis puts allogeneic hematopoietic stem-cell transplantation (HSCT) patients at increased risk for herpes zoster as long as VZV-specific cellular immunity is impaired. Although an efficient live attenuated VZV vaccine for zoster prophylaxis exists, it is not approved for immunocompromised patients for safety reasons. Knowledge of immunogenic VZV proteins would allow the design of a noninfectious, nonhazardous subunit vaccine suitable for patients with immunodeficiencies. The objective of this study was to identify T cell-defined virus proteins in a VZV-infected Vero cell extract that we have recently described as a reliable antigen format for interferon-gamma (IFN-γ) enzyme-linked immunosorbent spot (ELISpot) assays (Distler et al. 2008). We first separated the VZV-infected/-uninfected Vero cell extracts by size filtration and reverse-phase high-performance liquid chromatography (RP-HPLC). The collected fractions were screened for VZV reactivity with peripheral blood mononuclear cells (PBMCs) of VZV-seropositive healthy individuals in the sensitive IFN-γ ELISpot assay. Using this strategy, we successfully identified bioactive fractions that contained immunogenic VZV material. VZV immune reactivity was mediated by CD4+ memory T lymphocytes (T cells) of VZV-seropositive healthy individuals, as demonstrated in experiments with HLA-blocking antibodies and T cell subpopulations already published by Distler et al.
We next analyzed the bioactive fractions with electrospray ionization mass spectrometry (ESI-MS) techniques and identified the sequences of three VZV-derived proteins: glycoprotein E (gE), glycoprotein B (gB), and immediate early protein 62 (IE62). Complementary DNA of these identified proteins was used to generate in vitro transcribed RNA for effective expression in PBMCs by electroporation. We thereby established a reliable and convenient IFN-γ ELISpot approach to screen PBMCs of healthy donors and HSCT patients for T cell reactivity to single full-length VZV proteins. Application in 10 VZV-seropositive healthy donors demonstrated much stronger recognition of the glycoproteins gE and gB compared to IE62. In addition, monitoring experiments with ex vivo PBMCs of 3 allo-HSCT patients detected strongly increased CD4+ T cell responses to gE and gB for several weeks to months after zoster onset, while IE62 reactivity remained moderate. Overall, our results show for the first time that the VZV glycoproteins gE and gB are major targets of the post-transplant anti-zoster CD4+ T cell response. The screening approach introduced here may help to select VZV proteins recognized by memory CD4+ T cells for inclusion in a subunit vaccine that can be safely used for zoster prophylaxis in immunocompromised HSCT patients.