1000 results for diffusive viscoelastic model, global weak solution, error estimate
Abstract:
Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. For this, a complete renormalization scheme for models that allow for fermion mixing is highly desirable. The correct treatment of unstable particles makes this task difficult, and no satisfactory general solution can yet be found in the literature. In the present work, we study the renormalization of the fermion Lagrange density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second is an application to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave function renormalization constants are determined such that the subtracted propagator is diagonal on-shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion differ from those that should renormalize the outgoing fermion and the incoming antifermion, and are not related by hermiticity, as one would desire. Instead of defining field renormalization constants identical to the wave function renormalization ones, we differentiate the two by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections of external legs. In the case of Majorana fermions, the constraints on the model are less restrictive; here one may have a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counterterm for a fermion mixing matrix cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge-parameter dependence in its counterterm. Therefore, we propose to determine the mixing matrix counterterm by fixing the complete correction terms for a physical process to experimental measurements. As an example, we calculate the decay rate of a top quark and of a heavy neutrino. In each of the chosen models we provide sample calculations that can easily be extended to other theories.
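For orientation, the on-shell conditions described above can be written schematically as follows (a hedged paraphrase in standard notation, not the thesis' own formulas). The physical mass M_i and width Γ_i are identified from the complex pole of the full propagator, and the wave function renormalization constants are chosen so that the renormalized propagator Ŝ is diagonal with unit residue on-shell:

\[
\left[\slashed{p} - m_{0,i} - \Sigma_{ii}(\slashed{p})\right]_{\slashed{p}\,=\,M_i - \frac{i}{2}\Gamma_i} = 0,
\qquad
\hat S_{ij}(p) \;\xrightarrow{\;p^2 \to M_j^2\;}\; \frac{i\,\delta_{ij}}{\slashed{p} - M_j}\,.
\]

The absorptive (imaginary) parts of Σ are what obstruct choosing these constants in hermiticity-related pairs.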
Abstract:
After reconstructing the statutory framework governing fixed-term employment contracts in Italy and in the principal European legal systems (Spain, France and England), the thesis addresses the most significant problem areas of the institution, with reference to the private and public sectors, highlighting the main doctrinal and jurisprudential disputes. Particular attention is devoted to the questions that arose following the latest legislative amendments under the so-called Collegato lavoro (Law no. 183/2010), up to the decisive intervention of the Constitutional Court, which, in ruling no. 303 of 9 November 2011, declared legitimate the provision introducing the lump-sum compensatory indemnity, additional to the conversion of the contract. All the issues examined have highlighted the difficulty, for the higher courts as well as for EU and national judges, of finding a uniform and shared line in resolving the disputes in this area. The thesis closes with some reflections on flexibility and precariousness in the world of work, through a quantitative and qualitative assessment of the institution, with the aim of answering certain questions: is flexibility necessarily precariousness, or can it be read as a special form of employment? What are the possible antidotes to precariousness? In conclusion, it emerged that flexibility can represent a problem for businesses and for workers only in the long run. The solution was identified in the opportunity to invest in training. A new "socially and economically sustainable flexibility" was thus hypothesized, to be realized with the assistance of the Regions and, hence, of contributions from the European Regional Development Fund: in this way the worker can be guaranteed continuity of work through targeted training paths while, on the other hand, the employer will not have to bear the costs of training fixed-term employees.
Abstract:
This thesis considers quantum hydrodynamic (QHD) models, which are used in particular for the modeling of semiconductor devices. The QHD model consists of conservation equations for the particle density, the momentum and the energy density, including quantum corrections through the Bohm potential. First, an overview is given of known results for QHD models neglecting collision effects, which can be derived from a mixed-state Schrödinger system or from the Wigner equation. After reformulating the one-dimensional QHD equations with linear potential as a stationary Schrödinger equation, semianalytic versions of the QHD equations for the current-voltage curve are considered. Furthermore, viscous stabilizations of the QHD model are discussed, and the numerical viscosity of the upwind finite-difference scheme proposed by Gardner is computed. Next, the viscous QHD model is derived from the Wigner equation with a Fokker-Planck collision operator; this model contains the physical viscosity introduced by the collision operator. Existence of solutions (with strictly positive particle density) is shown for the isothermal, stationary, one-dimensional viscous model for general data and nonhomogeneous boundary conditions. The required estimates depend on the viscosity and therefore do not allow passing to the inviscid limit. Numerical simulations of a resonant tunneling diode, modeled with the non-isothermal, stationary, one-dimensional viscous QHD model, show the influence of the viscosity on the solution. Using the quantum entropy minimization method developed by Degond and Ringhofer, general QHD equations are derived from the Wigner-Boltzmann equation with a BGK collision operator. The derivation is based on a careful expansion of the quantum Maxwellian in powers of the scaled Planck constant. The resulting model also contains vortex terms and dispersive velocity terms. In this way, the current-voltage curve of the resonant tunneling diode can be computed numerically with the general QHD model in one dimension. The results show that the dispersive velocity term stabilizes the solution of the system.
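As a point of reference, the quantum correction mentioned above enters through the Bohm potential, and the isothermal, stationary, one-dimensional viscous QHD model analyzed here has, in one common scaling, the schematic form (standard notation assumed, since the abstract does not display the equations):

\[
(nu)_x = \nu\, n_{xx}, \qquad
\bigl(nu^2 + T n\bigr)_x - n V_x - \frac{\varepsilon^2}{2}\, n \left(\frac{(\sqrt{n})_{xx}}{\sqrt{n}}\right)_{\!x} = \nu\,(nu)_{xx} - \frac{nu}{\tau}, \qquad
\lambda^2 V_{xx} = n - C(x),
\]

where n is the particle density, u the velocity, V the electrostatic potential, C the doping profile, ε the scaled Planck constant, ν the viscosity and τ the relaxation time; the third term of the momentum equation is the Bohm-potential quantum correction.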
Abstract:
The aim of the present work was to investigate the influence of regulatory T cells (Treg) on the pathogenesis of asthma in a murine model. It was shown that co-expression of TGF-β1 and IL-10 on Treg is necessary to protect against airway hyperresponsiveness (AHR) in the animal model. Natural Treg conferred no protection. Furthermore, it was shown that the protection against AHR by TGF-β1 is mediated via recipient T cells. The mere presence of TGF-β1 was not sufficient; rather, the cytokine had to be expressed by Treg. No influence of TGF-β1-overexpressing Treg on peribronchial inflammation could be detected, whereas adoptive transfer of natural Treg significantly reduced the eosinophil count in bronchoalveolar lavage, and the eosinophilia correlated with IL-5 levels in the lavage fluid. This work thus demonstrated a decoupling of the mechanisms of AHR and inflammation. Further elucidation of the mechanisms by which TGF-β1- and IL-10-producing Treg suppress AHR could therefore enable the development of new therapeutic approaches for airway diseases.
Abstract:
The efficacy of a vaccine depends on many parameters, among them the chosen antigen, the formulation in which the antigen is used, and the route of administration. Antigen-encoding ribonucleic acid (RNA) is nowadays considered a safe and efficient alternative to traditional vaccine formulations such as peptides, recombinant proteins, viral systems or DNA-based vaccines. Regarding the site of administration, the lymph node represents an optimal milieu for the interaction between antigen-presenting cells and T cells. Against this background, the aim of this work was to develop, characterize, and test for anti-tumoral efficacy a vaccination approach based on the direct in vivo transfer of antigen-encoding in vitro transcribed RNA (IVT-RNA). In the present work it was shown that dendritic cells (DCs) can be transfected in vitro with IVT-RNA at high efficiency and possess a high stimulatory capacity. By sequence modification of the IVT-RNA we were able to increase transcript stability and translational efficiency, which led to an increase of the stimulatory capacity in vivo. Furthermore, we investigated the effect on the efficiency of MHC class I and class II presentation of inserting a signal peptide at the 5' end and a C-terminal transmembrane and cytosolic domain of an MHC class I molecule at the 3' end of the antigen-encoding sequence. We demonstrated in vitro and in vivo that this modification leads to an enhanced, simultaneous stimulation of antigen-specific CD4+ and CD8+ T cells. On the basis of the optimized vector cassettes we established the intranodal (i.n.) transfection of antigen-presenting cells in the mouse. Using various reporter systems (eGFP-RNA, fluorescently labeled RNA), we showed that intranodal application of IVT-RNA leads to selective transfection and maturation of lymph-node-resident DCs. To examine the immunological effects, model antigen systems based primarily on influenza hemagglutinin A and ovalbumin were used. Both antigens were employed as antigen-MHC fusion constructs. As responder cells, TCR-transgenic lymphocytes were used that recognize MHC class I or class II restricted epitopes of the influenza hemagglutinin A or the ovalbumin protein. We showed in vivo that intranodal immunization with IVT-RNA leads to efficient stimulation and expansion of antigen-specific CD4+ and CD8+ T cells in a dose-dependent manner. Functionally, these T cells were shown to secrete cytokines and to be capable of cytolysis. By repetitive i.n. RNA immunization we were able to prime CD8+ T cells in naive mice against both viral and tumor-associated antigens. The primed T cells generated cytolytic activity against target cells loaded with the specific peptide. Moreover, we were able to expand memory cells. Finally, we showed in tumor models, in both prophylactic and therapeutic experiments, that i.n. RNA vaccination has the potency to induce anti-tumoral immunity.
Abstract:
Computer simulations play an ever-growing role in the development of automotive products. Assembly simulation, like many other processes, is used systematically even before the first physical prototype of a vehicle is built, in order to check whether particular components can be assembled easily or whether another part is in the way. Usually, this kind of simulation is limited to rigid bodies. However, a vehicle contains a multitude of flexible parts of various types: cables, hoses, carpets, seat surfaces, insulations, weatherstrips... Since most of the problems in these simulations concern one-dimensional components, and since an intuitive tool for cable routing is still needed, we have chosen to concentrate on this category, which includes cables, hoses and wiring harnesses. In this thesis, we present a system for simulating one-dimensional flexible parts such as cables or hoses. The modeling of bending and torsion follows the Cosserat model. For this purpose we use a generalized spring-mass system and describe its configuration by a carefully chosen set of coordinates. Gravity and contact forces, as well as the forces responsible for length conservation, are expressed in Cartesian coordinates. Bending and torsion effects, however, can be dealt with more effectively by using quaternions to represent the orientation of the segments joining two neighboring mass points. This augmented system allows an easy formulation of all interactions in the most appropriate coordinate type and yields a strongly banded Hessian matrix. An energy-minimizing process provides a solution free of the oscillations that are typical of spring-mass systems. The use of integral forces, similar to an integral controller, makes it possible to enforce the constraints exactly. The whole system is numerically stable and can be solved at interactive frame rates. It is integrated in the DaimlerChrysler in-house Virtual Reality software veo for use in applications such as cable routing and assembly simulation, and has been well received by users. Parts of this work have been published at the ACM Solid and Physical Modeling Conference 2006 and have been selected for the special issue of the Computer-Aided Design journal devoted to the conference.
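To make the quaternion treatment of bending and torsion concrete, the following is a minimal sketch of a discrete Cosserat-type bend/twist energy (illustrative only, not the thesis implementation: the stiffnesses kb and kt, the small-angle rotation-vector extraction, and the choice of the local z-axis as segment tangent are all assumptions):

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def bend_twist_energy(quats, kb, kt, seg_len):
    """Discrete bend/twist energy of a chain of segments.

    quats[i] is the unit quaternion orienting segment i (all assumed in
    the same hemisphere, w > 0). The relative rotation between
    neighboring segments gives a discrete Darboux vector: its components
    about the two material normals measure bending, the component about
    the tangent (local z-axis) measures torsion."""
    E = 0.0
    for i in range(len(quats) - 1):
        d = quat_mul(quat_conj(quats[i]), quats[i + 1])  # relative rotation
        # Small-angle approximation: the vector part of d is roughly
        # half the rotation vector.
        omega = 2.0 * d[1:] / max(d[0], 1e-12)
        bend = omega[0]**2 + omega[1]**2
        twist = omega[2]**2
        E += 0.5 * (kb * bend + kt * twist) / seg_len
    return E
```

In such a formulation the elastic energy couples only neighboring segments, which is consistent with the strongly banded Hessian mentioned above.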
Abstract:
In this thesis a mathematical model is derived that describes charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the state of the art of semiconductor device simulation and a brief review of the historical development, attention shifts to the construction of the model that serves as the basis of the subsequent derivations. The starting point is an important equation from the theory of dilute gases, from which the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process, which is mainly taken from a scientific paper and does not constitute the focus of this thesis. In the following phase we specify the mathematical setting and make the model assumptions precise, using methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem, whereas the theory of scalar elliptic equations is by now well established. Subsequently, we turn to the numerical discretization of the equations. A special finite-element method is used for the discretization; this particular approach is necessary to make the numerical results suitable for practical application. By a series of transformations of the discrete model we derive a system of algebraic equations amenable to numerical evaluation. Using computer programs written for this work, we solve the equations to obtain approximate solutions. These programs are based on new, specialized iteration procedures that were developed and thoroughly tested within the scope of this research. Owing to their importance and novelty, they are explained and demonstrated in detail. We compare these new iterations with a standard method, complemented by a feature adapted to the current context. A further innovation is the computation of solutions in three-dimensional domains, which are still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have a justifiable working complexity. Simulation results for some models of contemporary semiconductor devices are shown and commented on in detail. Finally, we give an outlook on future developments and enhancements of the models and algorithms used.
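The abstract does not display the model equations; purely for orientation, a scaled energy-transport system of the kind commonly derived from the Boltzmann equation of dilute gas theory looks as follows (an assumption about the model class, with sign conventions that vary across the literature, not a quotation from the thesis):

\[
\partial_t n + \operatorname{div} J_n = 0, \qquad
J_n = -\bigl(\nabla(nT) - n\nabla V\bigr),
\]
\[
\partial_t\bigl(\tfrac{3}{2}nT\bigr) + \operatorname{div} J_w = J_n\cdot\nabla V + W(n,T), \qquad
J_w = -\tfrac{5}{2}\bigl(\nabla(nT^2) - nT\nabla V\bigr),
\]

coupled to the Poisson equation \(\lambda^2 \Delta V = n - C\) for the electrostatic potential. The coupling between the equations for n and T is what places the system outside the established scalar elliptic theory mentioned above.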
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, used especially in artificial intelligence for the autonomous control of simulated agents or real hardware robots in dynamic and unpredictable environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation schemes. The goal of this thesis is to make reinforcement learning applicable to problems of, in principle, arbitrarily high dimension via nonparametric function approximation (more precisely, regularization networks). Regularization networks generalize ordinary basis function networks by parameterizing the sought solution through the data, so that the explicit choice of nodes/basis functions is avoided and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are linear approximators, technically easy to handle, for which the existing convergence guarantees of reinforcement learning remain valid (unlike, say, feed-forward neural networks). These theoretical advantages, however, come with a very practical problem: the computational cost of regularization networks naturally scales as O(n^3), where n is the number of data points. This is especially problematic because in reinforcement learning the learning process is online -- the samples are generated by an agent/robot while it interacts with the environment, so updates to the solution must be made immediately and at low computational cost. The contribution of this thesis is therefore in two parts. In the first part we formulate an efficient learning algorithm for regularization networks for general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on recursive least squares, but can insert not only new data but also new basis functions into the existing model in constant time. This is made possible by the "subset of regressors" approximation, in which the kernel is approximated by a strongly reduced selection of the training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at runtime. In the second part we carry this algorithm over to approximate policy evaluation via least-squares based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behavior. Overall, we develop a highly data-efficient method particularly suited to learning problems in robotics with continuous, high-dimensional state spaces and stochastic state transitions. The method does not rely on a model of the environment, works largely independently of the dimension of the state space, achieves convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can operate in time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup keepaway problem, and the control of a (simulated) octopus tentacle.
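As an illustration of the "subset of regressors" idea with greedy basis selection from the data stream, here is a minimal sketch (illustrative assumptions throughout: the crude novelty test, the ridge term, and the re-solve in predict() stand in for the constant-time recursive updates that the thesis actually develops):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian kernel between two points.
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

class SparseOnlineRegNet:
    """Online regularized least squares with a 'subset of regressors'
    kernel approximation: basis centers are picked greedily from the
    data stream (here by a crude novelty test). A sketch of the idea,
    not the thesis algorithm."""

    def __init__(self, gamma=1.0, lam=1e-2, tol=0.1):
        self.gamma, self.lam, self.tol = gamma, lam, tol
        self.centers = []   # dictionary of basis centers
        self.A = None       # accumulated k k^T (normal equations)
        self.b = None       # accumulated k y
        self.Kmm = None     # kernel matrix among the centers

    def _k(self, x):
        return np.array([rbf(c, x, self.gamma) for c in self.centers])

    def _grow(self, x):
        # Adopt x as a new basis center and pad all cached matrices.
        self.centers.append(np.asarray(x, dtype=float))
        m = len(self.centers)
        Kmm = np.zeros((m, m)); A = np.zeros((m, m)); b = np.zeros(m)
        if m > 1:
            Kmm[:m-1, :m-1] = self.Kmm
            A[:m-1, :m-1] = self.A
            b[:m-1] = self.b
        row = self._k(self.centers[-1])
        Kmm[m-1, :], Kmm[:, m-1] = row, row
        self.Kmm, self.A, self.b = Kmm, A, b

    def update(self, x, y):
        # Greedy selection: add x as a center if it is poorly
        # represented by the existing dictionary.
        if not self.centers or np.max(self._k(x)) < 1.0 - self.tol:
            self._grow(x)
        k = self._k(x)
        self.A += np.outer(k, k)
        self.b += k * y

    def predict(self, x):
        # For brevity we re-solve the regularized normal equations;
        # the thesis instead updates the solution recursively in
        # constant time per sample.
        alpha = np.linalg.solve(self.A + self.lam * self.Kmm, self.b)
        return float(self._k(x) @ alpha)
```

Usage would feed (x, y) pairs via update() as the agent generates them and query predict() between updates; in the thesis the same machinery is carried over to least-squares temporal-difference learning, where the regression targets come from the Bellman equation.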
Abstract:
In this work, we consider a simple model problem for the electromagnetic exploration of small perfectly conducting objects buried within the lower halfspace of an unbounded two-layered background medium. In possible applications, such as humanitarian demining, the two layers would correspond to air and soil. Moving a set of electric devices parallel to the surface of the ground to generate a time-harmonic field, the induced field is measured within the same devices. The goal is to retrieve information about buried scatterers from these data. In mathematical terms, we are concerned with the analysis and numerical solution of the inverse scattering problem of reconstructing the number and positions of a collection of finitely many small perfectly conducting scatterers buried within the lower halfspace of an unbounded two-layered background medium from near-field measurements of time-harmonic electromagnetic waves. For this purpose, we first study the corresponding direct scattering problem in detail and derive an asymptotic expansion of the scattered field as the size of the scatterers tends to zero. Then, we use this expansion to justify a noniterative MUSIC-type reconstruction method for the solution of the inverse scattering problem. We propose a numerical implementation of this reconstruction method and provide a series of numerical experiments.
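A hedged sketch of the noniterative MUSIC-type step (the multistatic response matrix and the layered-medium background Green's vectors g_z are assumed to be given; assembling them from the asymptotic expansion is the model-specific part not shown here):

```python
import numpy as np

def music_indicator(N, test_vectors, signal_dim):
    """MUSIC-type indicator for locating small buried scatterers.

    N            : (m, m) multistatic response matrix (device i <- device j)
    test_vectors : dict mapping sampling point z to the background
                   Green's vector g_z in C^m (model-dependent input)
    signal_dim   : dimension of the signal subspace, tied to the
                   number of scatterers

    The indicator peaks at points z where g_z is (numerically)
    orthogonal to the noise subspace of N."""
    U, s, Vh = np.linalg.svd(N)
    noise = U[:, signal_dim:]                       # noise-subspace basis
    indicator = {}
    for z, g in test_vectors.items():
        g = g / np.linalg.norm(g)
        resid = np.linalg.norm(noise.conj().T @ g)  # projection onto noise space
        indicator[z] = 1.0 / max(resid, 1e-15)
    return indicator
```

Evaluating the indicator over a sampling grid in the lower halfspace then reveals the number and positions of the scatterers as sharp peaks, which is what the numerical experiments assess.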
Abstract:
The purpose of this doctoral thesis is to prove existence for a mutually catalytic random walk with infinite branching rate on countably many sites. The process is defined as a weak limit of an approximating family of processes. An approximating process is constructed by adding jumps to a deterministic migration on an equidistant time grid. As the law of the jumps we choose the invariant probability measure of the mutually catalytic random walk with finite branching rate in the recurrent regime. This model was introduced by Dawson and Perkins (1998), and this thesis relies heavily on their work. Due to the properties of this invariant distribution, which is in fact the exit distribution of planar Brownian motion from the first quadrant, it is possible to establish a martingale problem for the weak limit of any convergent sequence of approximating processes. We prove a duality relation for the solution of this martingale problem, which goes back to Mytnik (1996) in the case of finite-rate branching, and this duality gives rise to weak uniqueness for the solution of the martingale problem. Using standard arguments we show that this solution is in fact a Feller process and has the strong Markov property. For the case of only one site we prove that the model we have constructed is the limit of finite-rate mutually catalytic branching processes as the branching rate approaches infinity. Therefore, it seems natural to refer to the above model as an infinite rate branching process. A corresponding convergence result on infinitely many sites, however, remains open.
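For orientation, the self-duality behind this uniqueness argument is, in the finite-rate literature going back to Mytnik, typically expressed through a duality function of the following form (quoted here as an assumption, since the abstract does not display it):

\[
F(u, v) = \exp\bigl(-\langle u_1 + u_2,\, v_1 + v_2\rangle + i\,\langle u_1 - u_2,\, v_1 - v_2\rangle\bigr),
\]

where u = (u_1, u_2) and v = (v_1, v_2) are the configurations of the two populations; a relation of the type E[F(u_t, v_0)] = E[F(u_0, v_t)] then determines the one-dimensional distributions of the solution and hence yields weak uniqueness for the martingale problem.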
Abstract:
DNA block copolymers, a new class of hybrid material composed of a synthetic polymer and an oligodeoxynucleotide segment, possess unique properties that cannot be achieved by either of the two polymers alone. Among amphiphilic DNA block copolymers, DNA-b-poly(propylene oxide) (PPO) was chosen as a model system, because PPO is biocompatible and has a Tg < 0 °C; both properties might be essential for future applications in living systems. During my PhD study, I focused on the properties and structures of DNA-b-PPO molecules. First, DNA-b-PPO micelles were studied by scanning force microscopy (SFM) and fluorescence correlation spectroscopy (FCS). In order to control the size of the micelles without re-synthesis, micelles were incubated with the template-independent DNA polymerase TdT and deoxynucleotide triphosphates in reaction buffer solution. By carrying out ex-situ experiments, the growth of micelles was visualized by imaging in liquid with AFM. Complementary measurements with FCS and polyacrylamide gel electrophoresis (PAGE) confirmed the increase in size. Furthermore, the growth process was studied with AFM in-situ at 37 °C, whereby the growth of individual micelles could be observed. In contrast to the ex-situ reactions, the growth of micelles adsorbed on a mica surface for the in-situ experiments terminated about one hour after the reaction was initiated. Two reasons were identified for the termination: (i) blocking of catalytic sites by interaction with the substrate and (ii) reduced exchange of molecules between the micelles and the liquid environment. In addition, a geometrical model for AFM imaging was developed which allowed deriving the average number of mononucleotides added to DNA-b-PPO molecules as a function of the enzymatic reaction time (chapter 3). Second, a prototype of a macroscopic DNA machine made of DNA-b-PPO was investigated. As DNA-b-PPO molecules are amphiphilic, they can form a monolayer at the air-water interface. Using a Langmuir film balance, the energy released by DNA hybridization was converted into macroscopic movements of the barriers in the Langmuir trough. A specially adapted Langmuir trough was built to exchange the subphase without changing the water level significantly. Upon exchanging the subphase with buffer solution containing complementary DNA, an increase of lateral pressure was observed, which could be attributed to hybridization of single-stranded DNA-b-PPO. The pressure versus area-per-molecule isotherms were recorded before and after hybridization. I also carried out a series of control experiments in order to identify the best conditions for realizing a DNA machine with DNA-b-PPO. To relate the lateral pressure to molecular structures, Langmuir-Blodgett (LB) films were transferred to highly ordered pyrolytic graphite (HOPG) and mica substrates at different pressures. These films were then investigated with AFM (chapter 4). Finally, this thesis includes studies of DNA and DNA block copolymer assemblies with AFM, performed in cooperation with different groups of the Sonderforschungsbereich 625 “From Single Molecules to Nanoscopically Structured Materials”. AFM proved to be an important method to confirm the formation of multiblock copolymers and DNA networks (chapter 5).
Abstract:
We investigate the mathematics of finite particle systems coupled to a heat bath. The standard model of quantum electrodynamics at temperature zero provides a Hamiltonian H that describes the energy of particles interacting with photons. In the Heisenberg picture, the time evolution of the physical system is given by the action of a one-parameter group on a set of observables A; it is related to the solution of the Schrödinger equation for H. To describe states of A that represent the physical system close to thermal equilibrium at temperature T, we follow the approach of Jaksic and Pillet and construct a representation of A. The vectors in this representation define the states, and the time evolution is described by means of the standard Liouville operator L. In this doctoral thesis the following results are proved or derived: - the construction of a representation - the self-adjointness of the standard Liouville operator - the existence of an equilibrium state in this representation - the limit of the physical system for large times.
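Schematically, in notation standard for this framework (assumed here, not quoted from the abstract): the constructed representation π comes with a cyclic vector Ω_β representing equilibrium at inverse temperature β = 1/T, and the dynamics is implemented by the standard Liouville operator L via

\[
\pi\bigl(\tau^t(A)\bigr) = e^{itL}\,\pi(A)\,e^{-itL}, \qquad L\,\Omega_\beta = 0,
\]

so that self-adjointness of L is exactly what makes the unitary group, and hence the thermal time evolution, well defined.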
Abstract:
The present-day climate in the Mediterranean region is characterized by mild, wet winters and hot, dry summers. There is contradictory evidence as to whether the present-day conditions (“Mediterranean climate”) already existed in the Late Miocene. This thesis presents seasonally resolved isotope and element proxy data obtained from Late Miocene reef corals from Crete (Southern Aegean, Eastern Mediterranean) in order to illustrate climate conditions in the Mediterranean region during this time. The Late Miocene saw a transition from greenhouse to icehouse conditions without a Greenland ice sheet. Since the Greenland ice sheet is predicted to melt fully within the next millennia, Late Miocene climate mechanisms can be considered useful analogues in evaluating models of Northern Hemisphere climate conditions in the future. So far, high-resolution chemical proxy data on Late Miocene environments are limited. In order to enlarge the proxy database for this time span, the coral genus Tarbellastraea was evaluated as a new proxy archive, and proved reliable based on consistent oxygen isotope records of Tarbellastraea and the established paleoenvironmental archive of the coral genus Porites. In combination with lithostratigraphic data, global 87Sr/86Sr seawater chronostratigraphy was used to constrain the numerical age of the coral sites, assuming the Mediterranean Sea to be equilibrated with global open ocean water. 87Sr/86Sr ratios of Tarbellastraea and Porites from eight stratigraphically different sampling sites were measured by thermal ionization mass spectrometry. The ratios range from 0.708900 to 0.708958, corresponding to ages of 10 to 7 Ma (Tortonian to Early Messinian). Spectral analyses of multi-decadal time series yield interannual δ18O variability with periods of ~2 and ~5 years, similar to that of modern records, indicating that pressure field systems comparable to those controlling the seasonality of present-day Mediterranean climate existed, at least intermittently, already during the Late Miocene. In addition to sea surface temperature (SST), the δ18O composition of coral aragonite is controlled by other parameters such as local seawater composition, which, as a result of precipitation and evaporation, influences sea surface salinity (SSS). The Sr/Ca ratio is considered to be independent of salinity and was therefore used as an additional proxy to estimate seasonality in SST. Major and trace element concentrations in coral aragonite, determined by laser ablation inductively coupled plasma mass spectrometry, yield significant variations along a transect perpendicular to coral growth increments, and record varying environmental conditions. The comparison between the average SST seasonalities of 7°C and 9°C, derived from average annual δ18O (1.1‰) and Sr/Ca (0.579 mmol/mol) amplitudes, respectively, indicates that the δ18O-derived SST seasonality is biased by seawater composition, reducing the δ18O amplitude by 0.3‰. This value is equivalent to a seasonal SSS variation of 1‰, as observed under present-day Aegean Sea conditions. Concentration patterns of non-lattice-bound major and trace elements, related to particles trapped within the coral skeleton, reflect seasonal input of suspended load into the reef environment. δ18O, Sr/Ca and non-lattice-bound element proxy records, as well as the geochemical compositions of the trapped particles, provide evidence for intense precipitation in the Eastern Mediterranean during winters.
Winter rain caused freshwater discharge and transport of weathering products from the hinterland into the reef environment. There is a trend in the coral δ18O data toward more positive mean δ18O values (–2.7‰ to –1.7‰) coupled with decreased seasonal δ18O amplitudes (1.1‰ to 0.7‰) from 10 to 7 Ma. This relationship is most easily explained in terms of more positive summer δ18O. Since coral diversity and annual growth rates indicate more or less constant average SST for the Mediterranean from the Tortonian to the Early Messinian, the more positive mean and summer δ18O values indicate increasing aridity during the Late Miocene, more pronounced during the summers. The analytical results imply that winter rainfall and summer drought, the main characteristics of the present-day Mediterranean climate, were already present in the Mediterranean region during the Late Miocene. Some models have argued that the Mediterranean climate did not exist in this region prior to the Pliocene. However, the data presented here show that conditions comparable to those of the present day existed either intermittently or permanently since at least about 10 Ma.
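The 0.3‰ bias is consistent with a short back-of-the-envelope check (the slope below is inferred from the quoted numbers for illustration, not taken from the thesis): the 1.1‰ δ18O amplitude over a 7°C seasonality implies a δ18O-SST slope of about 1.1/7 ≈ 0.16‰ per °C, and applying that slope to the Sr/Ca-derived seasonality of 9°C predicts

\[
\Delta\delta^{18}\mathrm{O}_{\mathrm{pred}} \approx 0.16\,\tfrac{\text{‰}}{{}^{\circ}\mathrm{C}} \times 9\,{}^{\circ}\mathrm{C} \approx 1.4\,\text{‰},
\]

so the observed 1.1‰ falls short by roughly 0.3‰, the portion attributed to the seasonal seawater composition (salinity) signal.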
Abstract:
Para-linked aromatic amides form what is arguably the stiffest and hardest class of organic molecules. Their oligomers and polymers are materials of extreme stability and chemical robustness. The present work describes the synthesis of well-defined oligo(p-benzamide)s (OPBA) up to the hepta(p-benzamide), whose crystal structure and thermal behavior are investigated in detail. Their exceptional stiffness is then exploited to prepare rod-coil copolymers with a well-defined OPBA rod block. The aggregation behavior of these copolymers is described, and the aggregates are visualized and characterized by atomic force microscopy (AFM). One focus of the research concerns the influence of chemical variations of the coil and rod blocks on aggregation. Starting from PEG-OPBA copolymers, it is shown how responsive triblocks can be prepared via controlled radical polymerization. The behavior of these triblocks in aqueous solution is investigated in more detail, and a model describing this behavior is developed on the basis of light scattering and AFM investigations. Besides the OPBAs, the thesis deals with the synthesis of well-defined oligo(p-phenylene terephthalamide)s (OPTA). The construction of PEG-based rod-coil copolymers with a monodisperse OPTA block is described, and their aggregates are imaged by AFM. The copolymers are used to demonstrate improved adhesion to Twaron fibers compared with pure PEG.
Abstract:
Stylolites are rough paired surfaces, indicative of localized stress-induced dissolution under a non-hydrostatic state of stress, separated by a clay parting which is believed to be the residuum of the dissolved rock. These structures are the most frequent deformation pattern in monomineralic rocks and thus provide important information about low-temperature deformation and mass transfer. The intriguing roughness of stylolites can be used to assess the amount of volume loss and paleo-stress directions, and to infer the destabilizing processes during pressure solution. But there is little agreement on how stylolites form and why these localized pressure solution patterns develop their characteristic roughness.

Natural bedding-parallel and vertical stylolites were studied in this work to obtain a quantitative description of the stylolite roughness and to understand the governing processes during their formation. Adapting scaling approaches based on fractal principles, it is demonstrated that stylolites show two self-affine scaling regimes, with roughness exponents of 1.1 and 0.5 for small and large length scales, separated by a crossover length at the millimeter scale. Analysis of stylolites from various depths proved that this crossover length is a function of the stress field during formation, as analytically predicted. For bedding-parallel stylolites the crossover length is a function of the normal stress on the interface, but vertical stylolites show a clear in-plane anisotropy of the crossover length, owing to the fact that the in-plane stresses (σ2 and σ3) are dissimilar. Stylolite roughness therefore contains a signature of the stress field during formation.

To address the origin of stylolite roughness, a combined microstructural (SEM/EBSD) and numerical approach is employed. Microstructural investigations of natural stylolites in limestones reveal that heterogeneities initially present in the host rock (clay particles, quartz grains) are responsible for the formation of the distinctive stylolite roughness. A two-dimensional numerical model, i.e. a discrete linear elastic lattice spring model, is used to investigate the roughness evolving from an initially flat, fluid-filled interface induced by heterogeneities in the matrix. This model generates rough interfaces with the same scaling properties as natural stylolites. Furthermore, two coinciding crossover phenomena in space and in time exist that separate length and time scales for which the roughening is balanced by either surface or elastic energies. The roughness and growth exponents are independent of the size, amount and dissolution rate of the heterogeneities. This allows the conclusion that the location of asperities is determined by a polymict multi-scale quenched noise, while the roughening process is governed by inherent processes, i.e. the transition from a surface-energy to an elastic-energy dominated regime.
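In schematic form (exponents from the text; the crossover expression simplifies the analytic prediction and is stated here as an assumption): the roughness amplitude w obeys

\[
w(L) \propto L^{\zeta}, \qquad \zeta \approx 1.1 \ \text{for } L < L_c, \qquad \zeta \approx 0.5 \ \text{for } L > L_c, \qquad L_c \sim \frac{\gamma E}{\sigma^2},
\]

with γ a surface energy, E an elastic modulus and σ a representative stress; the inverse-square stress dependence of the crossover length L_c is what allows stylolite roughness to serve as a paleostress gauge.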