21 results for Phenomenological theory (Physics)

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Relevance:

90.00%

Publisher:

Abstract:

The presented thesis revolves around the study of thermally responsive PNIPAAm-based hydrogels in aqueous environments, as studied by Fluorescence Correlation Spectroscopy (FCS).

The goal of the project was the engineering of PNIPAAm gels into biosensors. Specifically, a range of such gels were investigated both with respect to their dynamics and structure at the nanometer scale and with respect to their performance in retaining bound species upon thermal collapse (which PNIPAAm undergoes upon heating above 32 °C).

The requirements of FCS as a technique match the limitations imposed by the system: the need to intimately probe a fragile, easily altered system in its solvent. FCS requires a fluid environment to work and is based on observing the diffusion of fluorescent molecules at nanomolar concentrations, so it could be applied to probe the hydrogels at the nanometer scale with minimal invasiveness.

Variables addressed in the project included the crosslinking degree; structural changes during thermal collapse; behavior in different buffers; the possibility of decreasing the degree of inhomogeneity; the behavior of differently sized probes; and the effectiveness of antibody functionalization upon thermal collapse.

The results included evidence of heightened structural inhomogeneities during thermal collapse and under different buffer conditions; the use of annealing to decrease the degree of inhomogeneity; the use of differently sized probes to address different length scales of the gel; and successful functionalization both before and after collapse.

The thesis also addresses two side projects, likewise carried out via FCS. The first, on diffusion in inverse opals, produced a predictive simulation model for the diffusion of particles in confined systems as a function of the particle size relative to the characteristic sizes of the system. The second was the observation of the interaction of oppositely charged species in aqueous solution, resulting in a phenomenological theory and an evaluation method for both the average residence time of the different species together and their attachment likelihood.
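As background to the FCS measurements described above, a minimal sketch of the standard FCS autocorrelation model for free 3D diffusion through a Gaussian observation volume (a textbook expression, not a formula taken from the thesis): fitting a measured G(τ) with it yields the diffusion time τ_D and the mean occupancy N of the focal volume.

```python
import numpy as np

def fcs_autocorrelation(tau, N, tau_D, S=5.0):
    """Standard FCS autocorrelation for free 3D diffusion through a
    Gaussian observation volume.

    N     : mean number of fluorescent molecules in the volume
    tau_D : diffusion time through the focal volume
    S     : structure parameter (axial/lateral extent of the volume)
    """
    return (1.0 / N) / (1.0 + tau / tau_D) / np.sqrt(1.0 + tau / (S**2 * tau_D))

tau = np.logspace(-6, 0, 200)              # lag times in seconds
G = fcs_autocorrelation(tau, N=2.0, tau_D=1e-3)
print(G[0])                                # ~ 1/N = 0.5 for tau << tau_D
```

The amplitude G(0) = 1/N makes the technique sensitive at nanomolar concentrations: fewer molecules in the focus mean a larger correlation amplitude.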

Relevance:

80.00%

Publisher:

Abstract:

The present work was motivated by the insight that a theory of intentionality is incomplete without a theory of implicit intentionality. The outline of such a theory rests on the assumption that implicit ("supplementary" or "co-conscious") experiential contents are intentionally effective: that they contribute to the "constitution" of intentional objects in the sense of Husserl and Gurwitsch. The main aim of the present investigations is to work out the conditions and circumstances of this effectiveness. To this end, (1) a phenomenological theory of implicit content was critically explicated, and (2) this theory was put to the test against several current approaches in analytic philosophy. In the phenomenological part of the work, the methodological presuppositions of Gurwitsch's gestalt-theoretical reformulation of Husserl's project were first examined critically, with particular attention to the so-called constancy hypothesis. Furthermore, Husserl's conception of the noema and his doctrine of horizons were explicated from the perspective of Gurwitsch's field theory of consciousness, and Gurwitsch's threefold articulation of the field of consciousness (the copresence-coherence-relevance scheme) was subsequently extended by the phenomenological concepts of "potentiality", "typicality", and "motivation". The relations underlying these concepts proved to be "more than merely contingent, but less than logical or necessary" (Mulligan). Using examples from the analytic philosophy of perception (Dretske, Peacocke, Dennett, Kelly) and of language (Sperber, Wilson, Searle), the phenomenological concept of implicit content was critically assessed and developed further.
In doing so, (1) the connection between the phenomenological concept of "prepredicative content" and the analytic concept of "nonconceptual content" was demonstrated, and (2) criteria for the ascription of implicit beliefs in typical cases of predicative intentionality were collected and systematized.

Relevance:

80.00%

Publisher:

Abstract:

The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ⁷Li⁺ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double-resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts, both the time dilation and the ion velocity can be extracted with high accuracy, allowing a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve robust control of the laser frequencies required for the beam times, a redundant system of frequency standards consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb was developed. At the experimental section of the ESR, an automated laser-beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam, as well as a fluorescence detection system, were set up. During the first experiments, the production, acceleration, and lifetime of the metastable ions at the GSI heavy-ion facility were investigated for the first time. The characterisation of the ion beam made it possible, for the first time, to measure its velocity directly via the Doppler effect, which resulted in a new, improved calibration of the electron cooler. In the following step, the first sub-Doppler spectroscopy signals from an ion beam at 33.8 % of the speed of light could be recorded. The unprecedented accuracy of these experiments allowed us to derive a new upper bound on possible higher-order deviations from special relativity.
Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus contribute further to the test of the standard model.
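The parallel/antiparallel excitation scheme described above tests time dilation through a relation familiar from Ives-Stilwell-type experiments: if special relativity holds, the product of the resonant lab-frame laser frequencies for parallel and antiparallel excitation equals the squared rest-frame transition frequency, ν_p ν_a = ν_0², independently of the ion velocity. A small numeric sketch (illustrative values, not the experiment's actual frequencies):

```python
import math

def resonant_lab_frequencies(nu0, beta):
    """Lab-frame laser frequencies resonant with a transition of
    rest-frame frequency nu0 for an ion moving with velocity beta = v/c.

    Antiparallel (counter-propagating) light is blue-shifted in the ion
    frame, parallel light is red-shifted, so resonance requires:
    """
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    nu_antiparallel = nu0 / (gamma * (1.0 + beta))  # ion sees nu_a * gamma * (1 + beta) = nu0
    nu_parallel = nu0 / (gamma * (1.0 - beta))      # ion sees nu_p * gamma * (1 - beta) = nu0
    return nu_parallel, nu_antiparallel

nu0 = 5.0e14                  # Hz, illustrative rest-frame frequency
beta = 0.338                  # ion velocity, 33.8 % of the speed of light
nu_p, nu_a = resonant_lab_frequencies(nu0, beta)

# Special relativity predicts nu_p * nu_a == nu0**2 for any beta
print(nu_p * nu_a / nu0**2)   # 1.0 up to rounding
```

Any measured deviation of this ratio from unity would signal physics beyond the standard time-dilation factor, which is what the experiment bounds.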

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we further develop the functional renormalization group (RG) approach to quantum field theory (QFT) based on the effective average action (EAA) and on the exact flow equation that it satisfies. The EAA is a generalization of the standard effective action that interpolates smoothly between the bare action for k → ∞ and the standard effective action for k → 0. In this way, the problem of performing the functional integral is converted into the problem of integrating the exact flow of the EAA from the UV to the IR. The EAA formalism deals naturally with several different aspects of a QFT. One aspect is related to the discovery of non-Gaussian fixed points of the RG flow that can be used to construct continuum limits. In particular, the EAA framework is a useful setting in which to search for asymptotically safe theories, i.e. theories valid up to arbitrarily high energies. A second aspect in which the EAA reveals its usefulness is non-perturbative calculations: the exact flow that it satisfies is a valuable starting point for devising new approximation schemes. In the first part of this thesis we review and extend the formalism; in particular, we derive the exact RG flow equation for the EAA and the related hierarchy of coupled flow equations for the proper vertices. We show how standard perturbation theory emerges as a particular way to iteratively solve the flow equation if the starting point is the bare action. Next, we explore both technical and conceptual issues by means of three different applications of the formalism: to QED, to general non-linear sigma models (NLσM), and to matter fields on curved spacetimes. In the main part of this thesis we construct the EAA for non-abelian gauge theories and for quantum Einstein gravity (QEG), using the background field method to implement the coarse-graining procedure in a gauge-invariant way.
We propose a new truncation scheme in which the EAA is expanded in powers of the curvature or field strength. Crucial to the practical use of this expansion is the development of new techniques to manage functional traces, such as the algorithm proposed in this thesis, which makes it possible to project the flow of all terms in the EAA that are analytic in the fields. As an application, we show how the low-energy effective action for quantum gravity emerges as the result of integrating the RG flow. In any treatment of theories with local symmetries that introduces a reference scale, the question of preserving gauge invariance along the flow becomes predominant. In the EAA framework this problem is dealt with through the use of the background field formalism. This comes at the cost of enlarging the theory space in which the EAA lives to the space of functionals of both fluctuation and background fields. In this thesis, we study how the identities dictated by the symmetries are modified by the introduction of the cutoff, and we study so-called bimetric truncations of the EAA that contain both fluctuation and background couplings. In particular, in the enlarged bimetric theory space, where the running of the cosmological constant and of Newton's constant is influenced by fluctuation couplings, we confirm the existence of the non-Gaussian fixed point for QEG that is at the heart of the Asymptotic Safety scenario in quantum gravity.
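The exact flow equation referred to above is, in the standard EAA formulation, the Wetterich equation (standard form, reproduced here as background):

```latex
\partial_t \Gamma_k[\varphi] \;=\; \frac{1}{2}\,
\mathrm{STr}\!\left[\left(\Gamma_k^{(2)}[\varphi] + R_k\right)^{-1} \partial_t R_k\right],
\qquad t = \ln k ,
```

where Γ_k^(2) is the second functional derivative of the EAA, R_k is the infrared cutoff kernel that suppresses modes below the scale k, and the supertrace sums over field indices and momenta. Integrating this flow from k → ∞ down to k → 0 reproduces the full effective action, which is the sense in which the functional integral is traded for an RG flow.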

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we investigate the phenomenology of supersymmetric particles at hadron colliders beyond next-to-leading order (NLO) in perturbation theory. We discuss the foundations of Soft-Collinear Effective Theory (SCET) and, in particular, we explicitly construct the SCET Lagrangian for QCD. As an example, we discuss factorization and resummation for the Drell-Yan process in SCET. We use techniques from SCET to improve existing calculations of the production cross sections for slepton-pair production and top-squark-pair production at hadron colliders. As a first application, we implement soft-gluon resummation at next-to-next-to-next-to-leading logarithmic order (NNNLL) for slepton-pair production in the minimal supersymmetric extension of the Standard Model (MSSM). This approach resums large logarithmic corrections arising from the dynamical enhancement of the partonic threshold region caused by steeply falling parton luminosities. We evaluate the resummed invariant-mass distribution and total cross section for slepton-pair production at the Tevatron and LHC and we match these results, in the threshold region, onto NLO fixed-order calculations. As a second application we present the most precise predictions available for top-squark-pair production total cross sections at the LHC. These results are based on approximate NNLO formulas in fixed-order perturbation theory, which completely determine the coefficients multiplying the singular plus distributions. The analysis of the threshold region is carried out in pair invariant mass (PIM) kinematics and in single-particle inclusive (1PI) kinematics. We then match our results in the threshold region onto the exact fixed-order NLO results and perform a detailed numerical analysis of the total cross section.

Relevance:

80.00%

Publisher:

Abstract:

We present a coarse grained model for computer simulations of lipid mixtures, which we use to study generic mechanisms for the formation of nanoscale membrane structures (lipid rafts). We observe that even a two component system can separate into rafts of finite size, and we study these rafts and other membrane structures in detail. We look at the characteristics of our model that enable these phenomena and how they may relate to lipid-cholesterol or lipid-lipid mixtures. We propose an explanation for our findings using elastic theory to describe a possible mechanism of raft stabilization via curvature differences between coexisting lipid phases and we investigate whether this theory can be used to explain the results of our computer simulations.
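The elastic description invoked in this abstract is presumably of the Helfrich type. As background (a standard continuum expression, not necessarily the exact functional used in the thesis), the bending energy of a membrane patch reads:

```latex
E_{\text{bend}} \;=\; \int \mathrm{d}A \left[ \frac{\kappa}{2}\,\bigl(2H - c_0\bigr)^2 \;+\; \bar{\kappa}\, K \right],
```

with mean curvature H, Gaussian curvature K, bending rigidity κ, saddle-splay modulus κ̄, and spontaneous curvature c_0. A mismatch in spontaneous curvature or rigidity between coexisting lipid phases can make the elastic cost grow with domain size, which is one way curvature differences can stabilize rafts of finite size rather than full macroscopic demixing.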

Relevance:

80.00%

Publisher:

Abstract:

The topic of this work is the development and combination of various numerical methods, as well as their application to problems of strongly correlated electron systems. Such materials exhibit many interesting physical properties, e.g. superconductivity and magnetic order, and play an important role in technological applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the last decades, the numerical solution of these models has already yielded many insights. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes. One of the strongest restrictions is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperature and finite interaction strength.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the predominant theory of strongly correlated systems for which conventional band-structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work has the same superior scaling with inverse temperature as BSS-QMC. We investigate the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent issue is the neglect of non-local interactions in DMFT. To address it, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model, and the KLM. The results for the different models differ strongly: while non-local correlations play an important role in the two-dimensional (anisotropic) model, in the paramagnetic phase the momentum dependence of the self-energy is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parameterized by the non-interacting dispersion. This particular structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way for the development of new schemes beyond the limits of DMFT.
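The DMFT cycle in which such an impurity solver is embedded can be sketched generically. Below is a minimal self-consistency loop for the Bethe lattice, where the hybridization function obeys Δ(iω_n) = t²G(iω_n), with a placeholder non-interacting impurity step standing in for a real (e.g. BSS-QMC) solver; all parameters are illustrative:

```python
import numpy as np

def dmft_bethe(t=0.5, beta=20.0, n_matsubara=256, n_iter=200, mix=0.5):
    """Minimal DMFT self-consistency loop for the Bethe lattice.

    Self-consistency condition: Delta(iw_n) = t^2 * G(iw_n).
    The 'impurity solver' here is just the non-interacting resolvent
    G = 1/(iw_n - Delta); in a real calculation this step is replaced
    by a genuine quantum impurity solver such as BSS-QMC.
    """
    iw = 1j * np.pi * (2 * np.arange(n_matsubara) + 1) / beta  # fermionic Matsubara freqs
    G = 1.0 / iw                                               # initial guess: atomic limit
    for _ in range(n_iter):
        delta = t**2 * G                  # Bethe-lattice self-consistency condition
        G_new = 1.0 / (iw - delta)        # placeholder impurity step
        G = mix * G_new + (1 - mix) * G   # linear mixing for stable convergence
    return iw, G

iw, G = dmft_bethe()
# At convergence G solves t^2 G^2 - iw G + 1 = 0
# (the semicircular Bethe-lattice Green's function, half-bandwidth 2t)
residual = np.max(np.abs(0.5**2 * G**2 - iw * G + 1))
print(residual)   # small at convergence
```

The structure (lattice self-consistency, impurity step, mixing, iterate to convergence) is the same in CDMFT; only the impurity problem becomes a cluster and the self-consistency involves a momentum sum over the lattice dispersion.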

Relevance:

40.00%

Publisher:

Abstract:

The subject of this thesis lies in the area of applied mathematics known as inverse problems. Inverse problems are those in which a set of measured data is analysed in order to extract as much information as possible about a model which is assumed to represent a system in the real world. We study two inverse problems, from classical and quantum physics respectively: QCD condensates from tau-decay data, and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, under rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
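The linearization-based approach can be illustrated generically: near a reference conductivity, the boundary voltages respond approximately linearly to conductivity perturbations, δV ≈ J δσ, and a regularized least-squares inversion recovers δσ. A toy sketch with a synthetic sensitivity matrix (no actual EIT forward model; all data below are fabricated for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearized problem: delta_V = J @ delta_sigma
n_measurements, n_pixels = 200, 50
J = rng.standard_normal((n_measurements, n_pixels))  # stand-in sensitivity (Jacobian) matrix
delta_sigma_true = np.zeros(n_pixels)
delta_sigma_true[10:15] = 1.0                        # a localized conductivity perturbation
delta_V = J @ delta_sigma_true                       # noise-free synthetic measurements

# Tikhonov-regularized least squares:
#   minimize ||J x - delta_V||^2 + lam * ||x||^2
lam = 1e-6
x = np.linalg.solve(J.T @ J + lam * np.eye(n_pixels), J.T @ delta_V)

rel_error = np.linalg.norm(x - delta_sigma_true) / np.linalg.norm(delta_sigma_true)
print(rel_error)   # near zero in this well-conditioned, noise-free toy case
```

In a real EIT problem J comes from a finite-element forward model, the data are noisy, and the choice of the regularization parameter λ controls the trade-off between resolution and noise amplification.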

Relevance:

30.00%

Publisher:

Abstract:

In the present dissertation, two different aspects of the odd-intrinsic-parity sector of mesonic chiral perturbation theory (mesonic ChPT) are investigated. First, the one-loop renormalization of the leading term, the so-called Wess-Zumino-Witten action, is carried out. To that end, the complete one-loop part of the theory must first be extracted by means of the saddle-point method. Subsequently, all singular one-loop structures are isolated within the framework of the heat-kernel technique. Finally, these divergent parts must be absorbed, which requires the most general anomalous Lagrangian of order O(p^6); this Lagrangian is developed systematically. If the chiral group SU(n)_L x SU(n)_R is extended to SU(n)_L x SU(n)_R x U(1)_V, additional monomials come into play. The renormalized coefficients of this Lagrangian, the low-energy constants (LECs), are initially free parameters of the theory that must be fixed individually. By considering a complementary vector-meson model, the amplitudes of suitable processes can be determined and, by comparison with the results of mesonic ChPT, a numerical estimate of some LECs can be made. In the second part, a consistent one-loop calculation is carried out for the anomalous process (virtual) photon + charged kaon -> charged kaon + neutral pion. As a check of our results, an existing calculation of the reaction (virtual) photon + charged pion -> charged pion + neutral pion is reproduced. Including the estimated values of the respective LECs, the associated hadronic structure functions can be determined numerically and discussed.

Relevance:

30.00%

Publisher:

Abstract:

Monte Carlo simulations are used to study the effect of confinement on a crystal of point particles interacting via an inverse power-law potential in d = 2 dimensions. This system can describe colloidal particles at the air-water interface, a model system for the experimental study of two-dimensional melting. It is shown that the state of the system (a strip of width D) depends very sensitively on the precise boundary conditions at the two "walls" providing the confinement. If one uses a corrugated boundary commensurate with the order of the bulk triangular crystalline structure, both orientational order and positional order are enhanced, and such surface-induced order persists near the boundaries even at temperatures where the bulk system is in its fluid state. However, using smooth repulsive boundaries as the confining walls, only the orientational order is enhanced, while positional (quasi-) long-range order is destroyed: the mean-square displacement of two particles n lattice parameters apart in the y-direction along the walls then crosses over from a logarithmic increase (characteristic of d = 2) to a linear increase (characteristic of d = 1). The strip then exhibits a vanishing shear modulus. These results are interpreted in terms of a phenomenological harmonic theory. The effect of incommensurability of the strip width D with the triangular lattice structure is also discussed, and a comparison with surface effects on phase transitions in simple Ising and XY models is made.
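The harmonic-theory interpretation mentioned above rests on the standard dimensional dependence of displacement correlations in harmonic crystals (textbook results, quoted here as background):

```latex
\left\langle \left[\,u(n) - u(0)\,\right]^2 \right\rangle \;\propto\;
\begin{cases}
\ln n , & d = 2 \quad \text{(quasi-long-range positional order)} \\[4pt]
n , & d = 1 \quad \text{(no positional order)}
\end{cases}
```

so the observed crossover from logarithmic to linear growth of the mean-square relative displacement along the walls signals an effective reduction from two-dimensional to one-dimensional behavior of the confined strip.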

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with calculations in manifestly Lorentz-invariant baryon chiral perturbation theory beyond order D=4. We investigate two different methods. The first approach consists of the inclusion of additional particles besides pions and nucleons as explicit degrees of freedom. This results in the resummation of an infinite number of higher-order terms which contribute to higher-order low-energy constants in the standard formulation. In this thesis the nucleon axial, induced pseudoscalar, and pion-nucleon form factors are investigated. They are first calculated in the standard approach up to order D=4. Next, the inclusion of the axial-vector meson a_1(1260) is considered. We find three diagrams with an axial-vector meson which are relevant to the form factors. Due to the applied renormalization scheme, however, the contributions of the two loop diagrams vanish and only a tree diagram contributes explicitly. The appearing coupling constant is fitted to experimental data of the axial form factor. The inclusion of the axial-vector meson results in an improved description of the axial form factor for higher values of momentum transfer. The contributions to the induced pseudoscalar form factor, however, are negligible for the considered momentum transfer, and the axial-vector meson does not contribute to the pion-nucleon form factor. The second method consists in the explicit calculation of higher-order diagrams. This thesis describes the applied renormalization scheme and shows that all symmetries and the power counting are preserved. As an application we determine the nucleon mass up to order D=6 which includes the evaluation of two-loop diagrams. This is the first complete calculation in manifestly Lorentz-invariant baryon chiral perturbation theory at the two-loop level. The numerical contributions of the terms of order D=5 and D=6 are estimated, and we investigate their pion-mass dependence. 
Furthermore, the higher-order terms of the nucleon sigma term are determined with the help of the Feynman-Hellmann theorem.
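The Feynman-Hellmann theorem invoked here relates the pion-nucleon sigma term to the quark-mass (equivalently, pion-mass) dependence of the nucleon mass (standard relation, valid at lowest order in the quark masses):

```latex
\sigma_{\pi N} \;=\; \hat{m}\, \frac{\partial m_N}{\partial \hat{m}}
\;\approx\; M_\pi^2\, \frac{\partial m_N}{\partial M_\pi^2},
```

where m̂ is the average light-quark mass and the second form uses the leading-order proportionality M_π² ∝ m̂. This is why a calculation of the pion-mass dependence of m_N, such as the two-loop result described above, directly constrains the sigma term.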

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electroproduction and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of two spin-independent (scalar) and four spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams (100 one-loop diagrams need to be calculated), several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulation of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present the results for the differential cross sections and the generalized polarizabilities of VCS off the proton.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a systematic analysis of the B̄ → X_s γ photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, m_b − 2E_γ = O(Λ_QCD), while the jet has a large energy but small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass m_b, the hadronic energy scale Λ_QCD, and an intermediate scale √(Λ_QCD m_b) associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at O(1/m_b) whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty in the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the B̄ → X_s γ branching fraction, defined with a cut E_γ > 1.6 GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5 %.
As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of −0.6 % < A_CP^SM < 2.8 % is possible for the asymmetry if resolved photon contributions are taken into account.
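At leading power, the factorization referred to above takes a schematic form along these lines (the standard SCET structure for the direct photon contributions; the precise convolution variables and prefactors are simplified here, with p_+ = m_b − 2E_γ):

```latex
\frac{\mathrm{d}\Gamma}{\mathrm{d}E_\gamma} \;\propto\;
H(m_b,\mu) \int \mathrm{d}\omega \;
J\!\bigl(m_b\,(p_+ - \omega),\mu\bigr)\, S(\omega,\mu),
```

with the hard function H encoding physics at the scale m_b, the jet function J at the intermediate scale √(Λ_QCD m_b), and the soft (shape) function S at Λ_QCD. The resolved photon contributions add further terms of this convolution type at O(1/m_b), involving subleading shape functions, and it is these terms that remain non-local even after integration over the endpoint region.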

Relevance:

30.00%

Publisher:

Abstract:

This work contains several applications of mode-coupling theory (MCT) and is separated into three parts. In the first part we investigate the liquid-glass transition of hard spheres for dimensions d → ∞ analytically and numerically up to d = 800 in the framework of MCT. We find that the critical packing fraction ϕ_c(d) scales as d² 2^(−d), which is larger than the Kauzmann packing fraction ϕ_K(d) found by a small-cage expansion by Parisi and Zamponi [J. Stat. Mech.: Theory Exp. 2006, P03017 (2006)]. The scaling of the critical packing fraction is different from the relation ϕ_c(d) ∼ d 2^(−d) found earlier by Kirkpatrick and Wolynes [Phys. Rev. A 35, 3072 (1987)]. This is due to the fact that the k dependence of the critical collective and self nonergodicity parameters f_c(k; d) and f_c^s(k; d) was assumed to be Gaussian in the previous theories. We show that in MCT this is not the case. Instead, f_c(k; d) and f_c^s(k; d), which become identical in the limit d → ∞, converge to a non-Gaussian master function on the scale k ∼ d^(3/2). We find that the numerically determined value of the exponent parameter λ, and therefore also the critical exponents a and b, depend on the dimension d, even at the largest evaluated dimension d = 800. In the second part we compare the results of a molecular-dynamics simulation of liquid Lennard-Jones argon far away from the glass transition [D. Levesque, L. Verlet, and J. Kurkijärvi, Phys. Rev. A 7, 1690 (1973)] with MCT. We show that the agreement between theory and computer simulation can be improved by taking binary collisions into account [L. Sjögren, Phys. Rev. A 22, 2866 (1980)]. We find that an empirical prefactor in the memory function of the original MCT equations leads to similar results. In the third part we derive the equations for a mode-coupling theory for the spherical components of the stress tensor. Unfortunately, it turns out that they are too complex to be solved numerically.

Relevance:

30.00%

Publisher:

Abstract:

I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two observation volumes, laterally shifted along the flow direction and created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information on the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing over hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task, because the complexity of the problem makes it impossible to derive the analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data.
I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method for retrieving the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited both for modern desktop PCs and for massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while some slippage may exist on a hydrophobic surface. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10–15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
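The ~100 nm normal resolution quoted above follows from the exponential decay of the evanescent intensity, I(z) = I_0 exp(−z/d). A small numeric sketch of the penetration depth with typical TIRF values (illustrative parameters, not necessarily those of the actual setup):

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """Penetration depth d of the evanescent intensity I(z) = I0 * exp(-z/d)
    for total internal reflection at an interface between media of
    refractive indices n1 (dense, e.g. glass) and n2 (rare, e.g. water):

        d = lambda0 / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2))

    Valid only above the critical angle asin(n2/n1).
    """
    s = n1**2 * math.sin(math.radians(theta_deg))**2 - n2**2
    if s <= 0:
        raise ValueError("angle below the critical angle: no total internal reflection")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s))

# Typical conditions: 488 nm excitation, glass/immersion oil (n1 = 1.52)
# against water (n2 = 1.33), incidence at 70 degrees (critical angle ~ 61 degrees)
d = penetration_depth(488.0, 1.52, 1.33, 70.0)
print(round(d, 1), "nm")   # a few tens of nanometres, consistent with ~100 nm confinement
```

Increasing the incidence angle above the critical angle shallows the evanescent field further, which is the knob that sets the normal resolution of the method.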