13 results for Explicit numerical method
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the energy-transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles (negatively charged electrons and so-called holes, quasi-particles of positive charge) as well as their energy distributions are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling and by the fact that local phenomena such as "hot electron effects" are only partially assessable from the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the most important property of this discretization is the continuous approximation of the normal fluxes. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators; at that stage a comparison of different estimators is performed. Additionally, a method to efficiently estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements.
For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
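The adaptive strategy described in this abstract follows the usual solve-estimate-mark-refine cycle. As a hedged illustration (a 1D toy interpolation problem with a simple midpoint error indicator and Doerfler-style marking, not the scheme developed in the thesis):

```python
import numpy as np

def estimate(x, u_exact):
    """Per-element error indicator: here simply the midpoint deviation of a
    piecewise-linear interpolant from the exact function on each interval."""
    eta = []
    for a, b in zip(x[:-1], x[1:]):
        m = 0.5 * (a + b)
        eta.append(abs(u_exact(m) - 0.5 * (u_exact(a) + u_exact(b))))
    return np.array(eta)

def refine(x, marked):
    """Bisect every marked interval."""
    new = list(x)
    for i in sorted(marked, reverse=True):
        new.insert(i + 1, 0.5 * (x[i] + x[i + 1]))
    return np.array(new)

def adapt(u_exact, tol=1e-3, theta=0.5):
    x = np.linspace(0.0, 1.0, 5)        # coarse initial mesh
    while True:
        eta = estimate(x, u_exact)
        if eta.max() < tol:
            return x
        # Doerfler-style marking: refine elements with the largest indicators
        marked = np.where(eta >= theta * eta.max())[0]
        x = refine(x, marked)

# A boundary-layer-like profile concentrates refinement near x = 0,
# mimicking how an adaptive algorithm resolves local phenomena.
mesh = adapt(lambda t: np.exp(-50.0 * t))
```

The point of the sketch is the loop structure, not the indicator: in the thesis the indicator is an a posteriori estimator for the discrete PDE solution rather than an interpolation error.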
Abstract:
The aim of this work was to study the correlation between the orientation and the excited-state lifetimes of organic dyes close to dielectric interfaces. For this purpose, an experimental setup was designed and built that guides the light through a prism in total internal reflection geometry. Fluorescence intensities and lifetimes for an ensemble of dye molecules were analyzed as a function of the excitation and detection polarizations. Working close to the total internal reflection angle enhanced the differences between polarization combinations. A classical electromagnetic model that treats a chromophore as a pair of point-like electric dipoles was developed. A numerical method to calculate the excitation and emission of dye molecules embedded in a multilayer system was implemented, by which a full simulation of the time-resolved fluorescence experiments was achieved. Free organic dyes and organic dyes covalently bound to polyelectrolyte chains were used. The polymer functionalization process avoided aggregation and provided control over the dyes' position to within a few nanometers of the interface. Moreover, by varying the pH, the polymer chains could be deposited on different substrates with different conformations, and the resulting fluorescence characteristics were analyzed. Initially, the fluorescence of organic dyes embedded in a polymer matrix was studied as a function of the distance between the fluorophores and the polymer-air interface. The non-radiative decay rate, the vacuum decay rate and the relative angle between the excitation and emission dipoles of the chromophores could be determined. Different free organic dyes were deposited onto different dielectric spacers, as close as possible to the air-dielectric interface. Surprisingly, the fluorescence characteristics of dyes deposited onto a polyelectrolyte layer were in good agreement with theoretical predictions for dyes in a polymer matrix, even when the layer was only 2 nm thick.
When functionalized chains were deposited at low pH on top of a polyelectrolyte spacer, the fluorescence likewise had the characteristics of emitters embedded in a polymer matrix. Surface deposition at high pH showed a behaviour intermediate between emitters embedded in polymer and emitters on top of the surface, in air. In general, for low pH values the chains are deposited on a substrate in a train-like conformation, whereas for high pH values they are deposited in a loop-like conformation. As a consequence, at low pH the functionalized polymer strongly interdigitates with the polyelectrolyte chains of the spacer, bringing most of the dyes inside the polymer; the fluorophores thus experience the polymer as their surrounding environment. For high pH values, on the other hand, the adsorbed dye-loaded chains form dense loops that extend away from the surface, so many fluorophores experience the air as their surrounding environment. Changing the spacer from polyelectrolyte to negatively charged silane produced contradictory results for lifetimes and intensities. The fluorescence intensities indicated the behaviour of emitters embedded in a polymer matrix, regardless of the pH value. The excited-state lifetimes, on the other hand, showed that at low pH the emitters behaved as in air, while at higher pH an intermediate behaviour between fluorophores located within and above a dielectric film was observed. The poor agreement between theoretical and experimental data may be due to the simplified model employed, in which the dipoles are assumed to lie on one side or the other of a geometrical air-dielectric interface. When the dielectric film is constituted by the functionalized polymer chains themselves, reality is more complex and a different model may apply. Nevertheless, possible applications of the technique arise from a qualitative analysis.
Abstract:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two laterally shifted (in the flow direction) observation volumes, created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and to use it to measure the shear rate and slip length of water flowing over hydrophilic and hydrophobic surfaces. However, extracting these parameters from the measured correlation curves requires a quantitative data analysis. This is not a straightforward task: the complexity of the problem makes it impossible to derive analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analyzing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data.
I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions in order to establish a fully quantitative method for retrieving the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments; it provides the optimum parameter values together with their statistical error bars. The approach is well suited both for modern desktop PCs and for massively parallel computers, the latter allowing the data analysis to be completed within short computing times. I applied this method to study the flow of an aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while on a hydrophobic surface some slippage may exist. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
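The core idea behind the cross-correlation analysis can be illustrated with a toy calculation (all parameter values below are assumptions for illustration, not the actual experimental settings): the cross-correlation of the two intensity signals peaks at the tracer transit time between the two observation volumes, which yields the flow velocity.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1e-4            # sampling interval [s] (assumed)
n = 200_000
velocity = 1e-3      # tracer drift speed [m/s] (assumed)
shift = 0.5e-6       # lateral distance between the two volumes [m] (assumed)

# Simulate tracers transiting both volumes: volume 2 sees the same
# burst pattern as volume 1, delayed by the transit time.
transit = shift / velocity                 # 5e-4 s, i.e. 5 samples
lag_samples = int(round(transit / dt))
bursts = (rng.random(n) < 0.001).astype(float)
i1 = bursts + 0.1 * rng.random(n)          # burst signal plus background noise
i2 = np.roll(bursts, lag_samples) + 0.1 * rng.random(n)

def xcorr_peak(a, b, max_lag):
    """Locate the lag at which the cross-correlation of a and b peaks."""
    a = a - a.mean()
    b = b - b.mean()
    lags = np.arange(1, max_lag)
    c = [np.dot(a[:-k], b[k:]) for k in lags]
    return lags[int(np.argmax(c))]

peak = xcorr_peak(i1, i2, 50)
v_est = shift / (peak * dt)                # recovered flow velocity
```

In the actual method the correlation curves cannot be inverted this directly, which is exactly why the thesis resorts to Brownian Dynamics simulation and Monte Carlo fitting.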
Abstract:
In many areas of mathematics it is desirable to understand the monodromy group of a homogeneous linear differential equation. Since only few analytic methods for computing this group are known, in the first part of this work we develop a numerical method for approximating its generators. In the second part we summarize the basics of the theory of the uniformization of Riemann surfaces and of arithmetic Fuchsian groups, and explain how our numerical method can be of use in the determination of uniformizing differential equations. For arithmetic Fuchsian groups with two generators we obtain the local data and free parameters of Lamé equations that uniformize the associated Riemann surfaces. In the third part we give a brief outline of homological mirror symmetry and introduce the $\widehat{\Gamma}$-class. We explain how it can be used to prove a Hodge-theoretic version of mirror symmetry for toric varieties. From this we derive conjectures about the monodromy group $M$ of Picard-Fuchs equations of certain families $f:\mathcal{X}\rightarrow \mathbb{P}^1$ of $n$-dimensional Calabi-Yau varieties. These state, first, that with respect to a natural basis the entries of the monodromy matrices in $M$ lie in the field $\mathbb{Q}(\zeta(2j+1)/(2\pi i)^{2j+1},\ j=1,\ldots,\lfloor (n-1)/2 \rfloor)$, and second, that topological invariants of the mirror partner of a generic fiber of $f:\mathcal{X}\rightarrow \mathbb{P}^1$ can be reconstructed from a special element of $M$. Finally, we use the methods developed in the first part to verify these conjectures, mainly in dimension three. In addition, we compile a list of candidate topological invariants of conjecturally existing three-dimensional Calabi-Yau varieties with $h^{1,1}=1$.
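The numerical approach to monodromy can be illustrated by a minimal sketch (not the method of the thesis): a solution is continued numerically along a loop around the singularity, here with a classical RK4 integrator applied to a first-order equation whose monodromy is known in closed form.

```python
import numpy as np

def monodromy(a, steps=2000):
    """Continue the solution of w'(z) = (a/z) w(z) once around z = 0 along
    the unit circle z = exp(i t); the exact monodromy is exp(2*pi*i*a)."""
    w = 1.0 + 0j                        # start from w(1) = 1
    ts = np.linspace(0.0, 2 * np.pi, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        # On the circle, dw/dt = (a/z) * (dz/dt) * w = i*a*w.
        f = lambda t, w: 1j * a * w
        k1 = f(t0, w)
        k2 = f(t0 + h / 2, w + h / 2 * k1)
        k3 = f(t0 + h / 2, w + h / 2 * k2)
        k4 = f(t1, w + h * k3)
        w += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return w                            # value after one loop = monodromy factor

m = monodromy(0.25)                     # exact value: exp(pi*i/2) = i
```

For a higher-order equation the same idea applies to a fundamental system of solutions, yielding an approximate monodromy matrix for each generator of the fundamental group.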
Abstract:
In this work from the field of few-nucleon physics, the newly developed method of the Lorentz Integral Transform (LIT) is applied to the study of nuclear photoabsorption and electron scattering off light nuclei. The LIT method allows exact calculations to be performed without an explicit determination of the final states in the continuum. The problem is reduced to the solution of a bound-state-like equation in which the final-state interaction is fully taken into account. The LIT equation is solved via an expansion in hyperspherical harmonics, whose convergence is accelerated by the use of an effective interaction in the hyperspherical formalism (EIHH). This work presents the first microscopic calculation of the total photoabsorption cross section below the pion production threshold for 6Li, 6He and 7Li. The calculations are carried out with central semirealistic NN interactions that partially simulate the tensor force, since the binding energies of the deuteron and of the three-body nuclei are reproduced correctly. The photoabsorption cross section of 6Li shows only one giant dipole resonance, whereas 6He exhibits two distinct peaks corresponding to the breakup of the halo and of the alpha core. The comparison with experimental data shows that adding a P-wave interaction improves the agreement considerably. For 7Li only one giant dipole resonance is found, in good agreement with the available experimental data. Regarding electron scattering, the calculation of the longitudinal and transverse response functions of 4He in the quasi-elastic region at intermediate momentum transfer is presented. A nonrelativistic model is used for the charge and current operators.
The calculations are performed with semirealistic interactions, and a gauge-invariant current is obtained by introducing a meson exchange current. The effect of the two-body current on the transverse response functions is investigated. Preliminary results are shown and compared with the available experimental data.
Abstract:
This thesis is concerned with calculations in manifestly Lorentz-invariant baryon chiral perturbation theory beyond order D=4. We investigate two different methods. The first approach consists of including additional particles besides pions and nucleons as explicit degrees of freedom. This results in the resummation of an infinite number of higher-order terms which contribute to higher-order low-energy constants in the standard formulation. In this thesis the nucleon axial, induced pseudoscalar, and pion-nucleon form factors are investigated. They are first calculated in the standard approach up to order D=4. Next, the inclusion of the axial-vector meson a_1(1260) is considered. We find three diagrams with an axial-vector meson that are relevant to the form factors. Due to the applied renormalization scheme, however, the contributions of the two loop diagrams vanish and only a tree diagram contributes explicitly. The coupling constant that appears is fitted to experimental data for the axial form factor. The inclusion of the axial-vector meson results in an improved description of the axial form factor at higher values of momentum transfer. The contributions to the induced pseudoscalar form factor, however, are negligible for the momentum transfers considered, and the axial-vector meson does not contribute to the pion-nucleon form factor. The second method consists of the explicit calculation of higher-order diagrams. This thesis describes the applied renormalization scheme and shows that all symmetries and the power counting are preserved. As an application we determine the nucleon mass up to order D=6, which includes the evaluation of two-loop diagrams. This is the first complete calculation in manifestly Lorentz-invariant baryon chiral perturbation theory at the two-loop level. The numerical contributions of the terms of order D=5 and D=6 are estimated, and we investigate their pion-mass dependence.
Furthermore, the higher-order terms of the nucleon sigma term are determined with the help of the Feynman-Hellmann theorem.
Abstract:
In various imaging problems the task is to use the Cauchy data of the solutions to an elliptic boundary value problem to reconstruct the coefficients of the corresponding partial differential equation. Often the examined object has known background properties but is contaminated by inhomogeneities that cause perturbations of the coefficient functions. The factorization method of Kirsch provides a tool for locating such inclusions. In this paper, the factorization technique is studied in the framework of coercive elliptic partial differential equations of divergence type. It has earlier been demonstrated that the factorization algorithm can reconstruct the support of a strictly positive (or negative) definite perturbation of the leading-order coefficient, or, if that remains unperturbed, the support of a strictly positive (or negative) perturbation of the zeroth-order coefficient. In this work we show that these two types of inhomogeneities can, in fact, be located simultaneously. Unlike in earlier articles on the factorization method, our inclusions may have disconnected complements, and we also weaken some other a priori assumptions of the method. Our theoretical findings are complemented by two-dimensional numerical experiments presented in the framework of the diffusion approximation of optical tomography.
Abstract:
In electrical impedance tomography, one tries to recover the conductivity inside a physical body from boundary measurements of current and voltage. In many practically important situations, the investigated object has known background conductivity but it is contaminated by inhomogeneities. The factorization method of Andreas Kirsch provides a tool for locating such inclusions. Earlier, it has been shown that under suitable regularity conditions positive (or negative) inhomogeneities can be characterized by the factorization technique if the conductivity or one of its higher normal derivatives jumps on the boundaries of the inclusions. In this work, we use a monotonicity argument to generalize these results: We show that the factorization method provides a characterization of an open inclusion (modulo its boundary) if each point inside the inhomogeneity has an open neighbourhood where the perturbation of the conductivity is strictly positive (or negative) definite. In particular, we do not assume any regularity of the inclusion boundary or set any conditions on the behaviour of the perturbed conductivity at the inclusion boundary. Our theoretical findings are verified by two-dimensional numerical experiments.
Abstract:
In this work, computer simulations of nucleation and crystallization processes in colloidal systems were carried out. A combination of Monte Carlo simulation methods and the forward flux sampling technique was implemented to study the homogeneous and heterogeneous nucleation of crystals of monodisperse hard spheres. For the moderately supercooled bulk hard-sphere system we predict the homogeneous nucleation rates and compare the results with other theoretical results and with experimental data. Furthermore, we analyze the crystalline clusters in the nucleation and growth zones, finding that crystalline clusters form in different shapes in the system: small clusters tend to be elongated in an arbitrary direction, while larger clusters are more compact and ellipsoidal in shape. In the next part we study heterogeneous nucleation at structured bcc (100) walls. The 2d analysis of the crystalline layers at the wall shows that the structure of the wall plays a decisive role in the crystallization of hard-sphere colloids. We also predict the heterogeneous crystal nucleation rates at various degrees of supersaturation. By analyzing the largest clusters at the wall, we additionally estimate the contact angle between crystal cluster and wall. It turns out that such systems are far from the wetting region and that the crystallization process proceeds via heterogeneous nucleation. In the last part of the work we consider the crystallization of Lennard-Jones colloidal systems confined between two planar walls. To study the freezing processes in such a system, we performed an analysis of the bond-orientational order parameter in the layers.
The results show that there is no hexatic order within a layer, which would indicate a Kosterlitz-Thouless melting process. Moreover, the hysteresis in the heating-freezing curves shows that the crystallization is an activated process.
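The bond-orientational analysis mentioned in the last paragraph is commonly based on the hexatic order parameter psi_6. A minimal sketch, assuming 2D positions and a simple distance cutoff for neighbours (illustrative, not the simulation code of the thesis):

```python
import numpy as np

def psi6(points, rcut=1.5):
    """Per-particle |psi_6|: magnitude 1 for a perfect sixfold-coordinated
    local environment, close to 0 for a disordered one."""
    psi = np.zeros(len(points), dtype=complex)
    for i, p in enumerate(points):
        d = points - p
        r = np.hypot(d[:, 0], d[:, 1])
        nbrs = (r > 0) & (r < rcut)
        if nbrs.any():
            theta = np.arctan2(d[nbrs, 1], d[nbrs, 0])
            psi[i] = np.mean(np.exp(6j * theta))
    return np.abs(psi)

# A perfect triangular lattice gives |psi_6| = 1 for bulk particles.
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
pts = np.array([i * a1 + j * a2 for i in range(10) for j in range(10)])
order = psi6(pts)
```

Averaging the complex psi_6 over a layer and checking the decay of its correlations is one standard way to test for the hexatic order whose absence is reported above.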
Abstract:
The interplay of hydrodynamic and electrostatic forces is of great importance for the understanding of colloidal dispersions. Theoretical descriptions are often based on the so-called standard electrokinetic model. This mean-field approach combines the Stokes equation for the hydrodynamic flow field, the Poisson equation for electrostatics, and a continuity equation describing the evolution of the ion concentration fields. In the first part of this thesis a new lattice method is presented to efficiently solve this set of non-linear equations for a charge-stabilized colloidal dispersion in the presence of an external electric field. Within this framework, the research is mainly focused on the calculation of the electrophoretic mobility. Since this transport coefficient is independent of the electric field only for small driving fields, the algorithm is based on a linearization of the governing equations: the zeroth order is the well-known Poisson-Boltzmann theory, and the first order is a coupled set of linear equations. This set of equations is further divided into several subproblems. A specialized solver for each subproblem is developed, and various tests and applications are discussed for every particular method. Finally, all solvers are combined in an iterative procedure and applied to several interesting questions, for example the effect of the screening mechanism on the electrophoretic mobility, or the charge dependence of the field-induced dipole moment and of the ion cloud surrounding a weakly charged sphere. In the second part a quantitative data analysis method is developed for a new experimental approach known as "Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy" (TIR-FCCS). The TIR-FCCS setup is an optical method that uses fluorescent colloidal particles to analyze the flow field close to a solid-fluid interface.
The interpretation of the experimental results requires a theoretical model, which is usually the solution of a convection-diffusion equation. Since an analytic solution is not available, owing to the form of the flow field and the boundary conditions, an alternative numerical approach is presented. It is based on stochastic methods, i.e., a combination of a Brownian Dynamics algorithm and Monte Carlo techniques. Finally, experimental measurements for a hydrophilic surface are analyzed using this new numerical approach.
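The Brownian Dynamics ingredient of such an approach amounts to integrating stochastic tracer trajectories in the flow field. A toy Euler-Maruyama sketch with a linear shear flow near a reflecting wall (all parameter values are assumed for illustration, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

D = 1e-12        # tracer diffusion coefficient [m^2/s] (assumed)
shear = 1e3      # near-wall shear rate [1/s] (assumed)
dt = 1e-6        # time step [s]
nsteps = 1000
npart = 500

# Positions: x along the flow, z = distance from the wall.
x = np.zeros(npart)
z = rng.uniform(0.0, 200e-9, npart)

for _ in range(nsteps):
    # Convection by the linear flow u(z) = shear * z plus Brownian noise.
    x += shear * z * dt + np.sqrt(2 * D * dt) * rng.normal(size=npart)
    z += np.sqrt(2 * D * dt) * rng.normal(size=npart)
    z = np.abs(z)            # reflecting boundary at the wall

mean_drift = x.mean()        # grows with the shear rate of the near-wall flow
```

Correlation functions computed from ensembles of such trajectories can then be compared against the measured curves, which is the role Monte Carlo fitting plays in the method described above.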
Abstract:
In this work I present aspects of QCD calculations which are closely related to the numerical evaluation of NLO QCD amplitudes, in particular the corresponding one-loop contributions, and to the efficient computation of the associated collider observables. Two topics emerged in the course of this work and constitute its main part. A large part focuses on the group-theoretic behaviour of one-loop amplitudes in QCD, with the aim of finding a way to treat the associated colour degrees of freedom correctly and efficiently. To this end a new approach is introduced which can be used to express colour-ordered one-loop partial amplitudes with several quark-antiquark pairs through shuffle sums over cyclically ordered primitive one-loop amplitudes. A second large part focuses on the local subtraction of pole terms that lead to divergences in primitive one-loop amplitudes. In particular, a method was developed to locally renormalize the primitive one-loop amplitudes using local UV counterterms and efficient recursive routines. Together with suitable local soft and collinear subtraction terms, the subtraction method is thereby extended to the virtual part of the calculation of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions of NLO observables. The method was finally applied successfully to the calculation of NLO jet rates in electron-positron annihilation in the leading-colour limit.
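The shuffle sums mentioned above rest on the shuffle product of ordered sequences. As a purely combinatorial illustration (not the amplitude code of the thesis), all order-preserving interleavings of two lists can be enumerated as follows:

```python
from itertools import combinations

def shuffles(a, b):
    """Yield all interleavings (shuffles) of sequences a and b that
    preserve the internal ordering of each sequence."""
    n, m = len(a), len(b)
    for pos in combinations(range(n + m), n):
        out, ia, ib = [None] * (n + m), iter(a), iter(b)
        ps = set(pos)
        for k in range(n + m):
            out[k] = next(ia) if k in ps else next(ib)
        yield out

# The shuffle of (1, 2) with (3,) has C(3, 1) = 3 terms.
terms = list(shuffles([1, 2], [3]))
```

In the amplitude context the list entries stand for external legs and each interleaving labels one primitive amplitude contributing to the shuffle sum.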
Abstract:
This thesis deals with the development of a novel simulation technique for macromolecules in electrolyte solutions, with the aim of a performance improvement over current molecular-dynamics based simulation methods. In solutions containing charged macromolecules and salt ions, it is the complex interplay of electrostatic interactions and hydrodynamics that determines the equilibrium and non-equilibrium behavior. However, the treatment of the solvent and dissolved ions makes up the major part of the computational effort. Thus an efficient modeling of both components is essential for the performance of a method. With the novel method we approach the solvent in a coarse-grained fashion and replace the explicit-ion description by a dynamic mean-field treatment. Hence we combine particle- and field-based descriptions in a hybrid method and thereby effectively solve the electrokinetic equations. The developed algorithm is tested extensively in terms of accuracy and performance, and suitable parameter sets are determined. As a first application we study charged polymer solutions (polyelectrolytes) in shear flow with focus on their viscoelastic properties. Here we also include semidilute solutions, which are computationally demanding. Secondly we study the electro-osmotic flow on superhydrophobic surfaces, where we perform a detailed comparison to theoretical predictions.
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are issues.
In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies, and compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.
From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen, and show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation allows one to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results thus build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
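The role of the singular value analysis and of the Levenberg-Marquardt damping can be illustrated on a toy least-squares problem (the model and all numbers below are assumptions for illustration, not the coarse-graining setup of the thesis): small singular values of the Jacobian flag parameter directions with negligible influence on the data, and the damped step then corrects mainly the well-determined direction.

```python
import numpy as np

# Toy forward model g(p): "structural data" depending strongly on p[0]
# and only weakly on p[1].
x = np.linspace(0.0, 1.0, 50)
def g(p):
    return p[0] * np.exp(-x) + 1e-4 * p[1] * x

p0 = np.array([1.0, 1.0])
data = g(np.array([2.0, 3.0]))         # synthetic "measurement"

def jacobian(p, h=1e-6):
    """Central finite-difference Jacobian of g."""
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (g(p + dp) - g(p - dp)) / (2 * h)
    return J

J = jacobian(p0)
s = np.linalg.svd(J, compute_uv=False)
# s[1] << s[0]: the second parameter direction barely affects the data.

# One Levenberg-Marquardt step: (J^T J + lam * I) dp = J^T r
r = data - g(p0)
lam = 1e-3 * s[0] ** 2
dp = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
p1 = p0 + dp
```

The step recovers the well-determined parameter almost exactly while leaving the weak one essentially untouched, which is precisely the stabilizing behaviour exploited by the regularized method.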