Abstract:
In this thesis we consider systems of finitely many particles moving on paths given by a strong Markov process and undergoing branching and reproduction at random times. The branching rate of a particle, its number of offspring and their spatial distribution are allowed to depend on the particle's position and possibly on the configuration of coexisting particles. In addition there is immigration of new particles, with the rate of immigration and the distribution of immigrants possibly depending on the configuration of pre-existing particles as well. In the first two chapters of this work, we concentrate on the case in which the joint motion of particles is governed by a diffusion with interacting components. The resulting process of particle configurations was studied by E. Löcherbach (2002, 2004) and is known as a branching diffusion with immigration (BDI). Chapter 1 contains a detailed introduction of the basic model assumptions, in particular an assumption of ergodicity which guarantees that the BDI process is positive Harris recurrent with finite invariant measure on the configuration space. This object and a closely related quantity, namely the invariant occupation measure on the single-particle space, are investigated in Chapter 2, where we study the problem of the existence of Lebesgue densities with nice regularity properties. For example, it turns out that the existence of a continuous density for the invariant measure depends on the mechanism by which newborn particles are distributed in space, namely whether branching particles reproduce at their death position or their offspring are distributed according to an absolutely continuous transition kernel. In Chapter 3, we assume that the quantities defining the model depend only on the spatial position but not on the configuration of coexisting particles.
In this framework (which was considered by Höpfner and Löcherbach (2005) in the special case that branching particles reproduce at their death position), the particle motions are independent, and we can allow for more general Markov processes instead of diffusions. The resulting configuration process is a branching Markov process in the sense introduced by Ikeda, Nagasawa and Watanabe (1968), complemented by an immigration mechanism. Generalizing results obtained by Höpfner and Löcherbach (2005), we give sufficient conditions for ergodicity in the sense of positive recurrence of the configuration process and finiteness of the invariant occupation measure in the case of general particle motions and offspring distributions.
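To give a concrete flavor of the dynamics described above, the following is a purely illustrative sketch (not the construction analyzed in the thesis) of a one-dimensional branching particle system with immigration: particles follow independent Brownian paths, branch at a constant rate into a random number of offspring born at the parent's death position, and immigrants arrive at the origin at a constant rate. All rates and distributions are hypothetical choices for illustration.

```python
import math
import random

def simulate_bdi(t_max=5.0, dt=0.01, branch_rate=0.3, immigration_rate=0.5,
                 offspring_probs=(0.4, 0.2, 0.4), seed=0):
    """Euler-type sketch of a branching particle system with immigration.

    Particles follow independent Brownian paths; in each time step a
    particle dies with probability branch_rate*dt and leaves k offspring
    at its death position with probability offspring_probs[k]; an
    immigrant arrives at the origin with probability immigration_rate*dt.
    All parameters are illustrative, not taken from the thesis.
    """
    rng = random.Random(seed)
    particles = [0.0]  # initial configuration: one particle at the origin
    t = 0.0
    while t < t_max:
        new_particles = []
        for x in particles:
            if rng.random() < branch_rate * dt:
                # branching event: draw the number of offspring
                k = rng.choices(range(len(offspring_probs)),
                                weights=offspring_probs)[0]
                new_particles.extend([x] * k)  # offspring born at death position
            else:
                # diffusive motion step
                new_particles.append(x + math.sqrt(dt) * rng.gauss(0.0, 1.0))
        if rng.random() < immigration_rate * dt:
            new_particles.append(0.0)  # immigrant at the origin
        particles = new_particles
        t += dt
    return particles

config = simulate_bdi()  # final particle configuration
```

Replacing the constant branching and immigration rates by functions of the particle position, or of the whole configuration, recovers the kind of position- and configuration-dependence the thesis allows.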
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes, there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both at zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius.
The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm} \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides an ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point.
It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
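The multi-histogram method mentioned above builds on the basic idea of histogram reweighting: samples generated at one coupling can be used to estimate observables at a nearby coupling. The simpler single-histogram version of this idea (a textbook formula, shown here only for orientation; the thesis generalizes the multi-histogram variant to $\beta$-dependent fermion actions) can be sketched as:

```python
import math

def reweighted_average(obs, energies, beta0, beta):
    """Single-histogram reweighting: estimate <obs> at coupling `beta`
    from Monte Carlo samples generated at coupling `beta0`.

    obs[i] is the observable and energies[i] the action/energy of
    configuration i. Weights exp(-(beta-beta0)*E_i) correct for the
    mismatch between the sampled and the target ensemble.
    """
    # subtract the maximal exponent for numerical stability
    shifts = [-(beta - beta0) * e for e in energies]
    m = max(shifts)
    weights = [math.exp(s - m) for s in shifts]
    num = sum(o * w for o, w in zip(obs, weights))
    den = sum(weights)
    return num / den

# at beta == beta0 the estimate reduces to the plain sample mean
e_samples = [1.0, 2.0, 3.0, 4.0]
mean_e = reweighted_average(e_samples, e_samples, beta0=1.0, beta=1.0)
```

Combining histograms from several couplings self-consistently, and accounting for a fermion determinant that itself depends on $\beta$, is what the generalization described in the thesis addresses.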
Abstract:
Two-dimensional fluids of hard disks are generally easy to simulate but surprisingly hard to describe theoretically. Despite their high relevance, most theoretical approaches remain qualitative. Here, a density functional theory (DFT) is presented which, for the first time, correctly describes the structure of such fluids at high densities and captures the onset of the freezing transition. It is shown that the fundamental measure theory approach leads to such a functional. Both density distributions around a test particle and two-particle correlation functions are investigated. Graphics cards offer very high computational efficiency, and their use in science is steadily increasing. This work discusses the advantages and disadvantages of graphics cards for scientific computations and shows that the DFT can be evaluated efficiently on graphics cards. A program implementing this is developed. It is shown that the results of simple (known) functionals agree with those of CPU computations, so that no systematic errors are to be expected from the use of the graphics card.
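To illustrate how a density functional is typically evaluated numerically (a generic mean-field toy model, not the fundamental measure functional for hard disks developed in the thesis), the equilibrium density profile can be obtained by Picard iteration of the Euler-Lagrange equation $\rho(x) = z\,e^{-\beta(V_{\rm ext}(x) + (u*\rho)(x))}$, mixing old and new profiles for stability:

```python
import math

def picard_dft(v_ext, u_pair, z=0.5, beta=1.0, n=64, L=8.0,
               alpha=0.1, tol=1e-10, max_iter=10000):
    """Picard iteration for a mean-field density functional in 1D.

    Solves rho(x) = z * exp(-beta*(v_ext(x) + (u*rho)(x))) on a periodic
    grid of n points and box length L. The mean-field closure is an
    illustrative toy functional, not the hard-disk FMT of the thesis.
    """
    dx = L / n
    xs = [i * dx for i in range(n)]
    rho = [z] * n  # uniform initial guess
    for _ in range(max_iter):
        new = []
        for x in xs:
            # mean-field term: periodic convolution of rho with the pair potential
            mf = sum(rho[j] * u_pair(min(abs(x - xs[j]), L - abs(x - xs[j])))
                     for j in range(n)) * dx
            new.append(z * math.exp(-beta * (v_ext(x) + mf)))
        # linear mixing stabilizes the fixed-point iteration
        rho_next = [(1 - alpha) * r + alpha * s for r, s in zip(rho, new)]
        if max(abs(a - b) for a, b in zip(rho, rho_next)) < tol:
            return rho_next
        rho = rho_next
    return rho

# flat external potential and short-ranged repulsion -> uniform profile
rho = picard_dft(lambda x: 0.0, lambda r: math.exp(-r * r))
```

Replacing `v_ext` by a test-particle potential yields the density distribution around a fixed particle, which is the route to pair correlation functions mentioned in the abstract.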
Abstract:
Intense research is being done in the field of organic photovoltaics in order to synthesize low band-gap organic molecules. These molecules are electron donors which, in combination with acceptor molecules, typically fullerene derivatives, form an active blend. This active blend has a phase-separated bicontinuous morphology on a nanometer scale. The highest recorded power conversion efficiency for such cells has been 10.6%. Organic semiconductors differ from inorganic ones due to the presence of tightly bound excitons (electron-hole pairs) resulting from their low dielectric constant (ε_r ≈ 2-4). An additional driving force is required to separate such Frenkel excitons, since their binding energy (0.3-1 eV) is too large for them to be dissociated by an electric field alone. This additional driving force arises from the energy difference between the lowest unoccupied molecular orbitals (LUMO) of the donor and acceptor materials. Moreover, the efficiency of the cells also depends on the difference between the highest occupied molecular orbital (HOMO) of the donor and the LUMO of the acceptor. Therefore, precise control and estimation of these energy levels are required. Furthermore, any external influences that change the energy levels will cause a degradation of the power conversion efficiency of organic solar cell materials. In particular, the role of photo-induced degradation in the morphology and electrical performance is a major contribution to degradation and needs to be understood on a nanometer scale. Scanning Probe Microscopy (SPM) offers the resolution to image the nanometer-scale bicontinuous morphology. In addition, SPM can be operated to measure the local contact potential difference (CPD) of materials, from which energy levels in the materials can be derived. Thus SPM is a unique method for the characterization of surface morphology, potential changes and conductivity changes under operating conditions.
In the present work, I describe investigations of organic photovoltaic materials upon photo-oxidation, which is one of the major causes of degradation of these solar cell materials. SPM, Nuclear Magnetic Resonance (NMR) and UV-Vis spectroscopy studies allowed me to identify the chemical reactions occurring inside the active layer upon photo-oxidation. From the measured data, it was possible to deduce the energy levels and explain the various shifts, which gave a better understanding of the physics of the device. In addition, I was able to quantify the degradation by correlating the local changes in the CPD and conductivity to the device characteristics, i.e., open-circuit voltage and short-circuit current. Furthermore, time-resolved electrostatic force microscopy (tr-EFM) allowed us to probe dynamic processes like the charging rate of the individual donor and acceptor domains within the active blend. Upon photo-oxidation, it was observed that the acceptor molecules were oxidized first, preventing the donor polymer from degrading. Work functions of electrodes can be tailored by modifying the interface with monomolecular thin layers of molecules formed by a chemical reaction in liquids. These modifications of the work function are particularly attractive for opto-electronic devices whose performance depends on the band alignment between the electrodes and the active material. In order to measure the shift in work function on a nanometer scale, I used KPFM in situ, i.e. in liquids, to follow changes in the work function of Au upon hexadecanethiol adsorption from decane. All the above investigations give us a better understanding of the photo-degradation processes of the active material at the nanoscale. Also, a method is proposed to compare the stability of various new materials used for organic solar cells, which eliminates the requirement to build fully functional devices, saving time and additional engineering effort.
Abstract:
The measurement of a possible electric dipole moment of the free neutron requires the most precise possible knowledge and monitoring of the magnetic field inside the n2EDM spectrometer chamber. The free spin precession of hyperpolarized ³He, combined with signal readout via optically pumped Cs magnetometers, can be used to achieve measurement sensitivity to magnetic field fluctuations in the range of a few femtotesla. At the Institute of Physics of the University of Mainz, a ³He/Cs test facility was set up to investigate the possibilities of reading out the ³He spin precession signal with a lamp-pumped Cs magnetometer. In addition, an ultra-compact and transportable polarization unit was developed and installed, which makes it possible to achieve a ³He hyperpolarization of up to 55 percent. Subsequently, the polarized ³He gas is automatically compressed and filled into two magnetometer cells in a sandwich arrangement inside the n2EDM spectrometer chamber. This thesis presents the results of the first measurements, successfully performed in January 2012. In these measurements, ³He gas was hyperpolarized in the ultra-compact polarization unit and transferred via the guiding fields of a transfer system into a four-layer mu-metal shield. Inside the magnetic shield, the free ³He spin precession could then be unambiguously detected with a lamp-pumped Cs magnetometer.
Abstract:
A major challenge in imaging is the detection of small amounts of molecules of interest. In the case of magnetic resonance imaging (MRI), their signals are typically concealed by the large background signal of, e.g., the tissue of the body. This problem can be tackled by hyperpolarization, which increases the NMR signals by up to several orders of magnitude. However, this strategy is limited for 1H, the most widely used nucleus in NMR and MRI, because the enormous number of protons in the body screens the small amount of hyperpolarized ones. Here, I describe a method giving rise to high 1H MRI contrast for hyperpolarized molecules against a large background signal. The contrast is based on the J-coupling induced rephasing of the NMR signal of molecules hyperpolarized via parahydrogen induced polarization (PHIP), and it can easily be implemented in common pulse sequences.

Hyperpolarization methods typically require expensive technical equipment (e.g. lasers or microwaves), and most techniques work only in batch mode, so the limited lifetime of the hyperpolarization restricts its applications. Therefore, the second part of my thesis deals with the simple and efficient generation of hyperpolarization. These two achievements open up alternative opportunities to use the standard MRI nucleus 1H for, e.g., metabolic imaging in the future.
Abstract:
The asymptotic safety scenario makes it possible to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow that allows one to formulate renormalization conditions rendering the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as the primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory.

As its main result, this thesis develops an algebraic algorithm which allows the systematic construction of the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility of more systematic investigations of the emergence of non-perturbative phenomena. As a by-product, several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained.

The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking into account the effect of a running ghost field renormalization on the gravitational coupling constants. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point.

Finally, the proposed algorithm is applied to the case of higher-derivative gravity including all curvature-squared interactions.
This establishes an improvement over existing computations, taking the independent running of the Euler topological term into account. Known perturbative results are reproduced in this case from the renormalization group equation, which, however, identifies a unique non-Gaussian fixed point.
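The exact functional renormalization group equation referred to above is, in the conventions common to the FRG literature, the flow equation for the effective average action $\Gamma_k$ (a standard formula, quoted here for orientation rather than taken from the thesis):
$$ \partial_t \Gamma_k = \frac{1}{2} \, {\rm Tr}\left[ \left( \Gamma_k^{(2)} + \mathcal{R}_k \right)^{-1} \partial_t \mathcal{R}_k \right] \;, \qquad t = \ln k \;, $$
where $\Gamma_k^{(2)}$ denotes the second functional derivative of $\Gamma_k$ with respect to the fields and $\mathcal{R}_k$ is the infrared regulator implementing the Wilsonian mode suppression. Evaluating the trace on the right-hand side for non-minimal differential operators is precisely where the off-diagonal heat kernel techniques of the thesis enter.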
Abstract:
In this thesis we present a broadly based computer simulation study of two-dimensional colloidal crystals under different external conditions. In order to fully understand the phenomena which occur when the system is compressed or when the walls are sheared, it proved necessary to also study the basic motion of the particles and the diffusion processes which occur in the absence of these external forces. In the first part of this thesis we investigate the structural transition in the number of rows which occurs when the crystal is compressed by placing the structured walls closer together. Previous attempts to locate this transition were impeded by huge hysteresis effects. We were able to determine the transition point with higher precision by applying both the Schmid-Schilling thermodynamic integration method and the phase switch Monte Carlo method to determine the free energies. These simulations showed not only that the phase switch method can successfully be applied to systems with a few thousand particles and a soft crystalline structure with a superimposed pattern of defects, but also that this method is far more efficient than thermodynamic integration when free energy differences are to be calculated. Additionally, the phase switch method enabled us to distinguish between several energetically very similar structures and to determine which one of them was actually stable. Another aspect considered in the first results chapter of this thesis is the ensemble inequivalence which can be observed when the structural transition is studied in the NpT and in the NVT ensemble. The second part of this work deals with the basic motion occurring in colloidal crystals confined by structured walls. Several cases are compared where the walls are placed in different positions, thereby introducing an incommensurability into the crystalline structure.
Also the movement of the solitons, which are created in the course of the structural transition, is investigated. Furthermore, we present results showing that not only the well-known mechanism of vacancies and interstitial particles leads to diffusion in our model system, but that cooperative ring rotation phenomena occur as well. In this part and the following we applied Langevin dynamics simulations. In the last chapter of this work we present results on the effect of shear on the colloidal crystal. The shear was implemented by moving the walls with constant velocity. We observed shear banding and, depending on the shear velocity, a break-up of the inner part of the crystal into several domains with different orientations. At very high shear velocities, holes are created in the structure; they originate close to the walls but also diffuse into the inner part of the crystal.
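The Langevin dynamics simulations mentioned above can be sketched, in their simplest overdamped (Brownian dynamics) form, by the update rule x(t+Δt) = x(t) + (Δt/γ)F(x) + √(2k_BT Δt/γ) ξ with Gaussian noise ξ. The following minimal sketch uses purely illustrative parameters and a stand-in force, not the wall and particle interactions of the thesis:

```python
import math
import random

def langevin_step(positions, force, dt=1e-4, gamma=1.0, kT=1.0,
                  rng=random):
    """One overdamped Langevin (Brownian dynamics) update for 2D particles.

    positions: list of (x, y) tuples; force: callable mapping a position
    to a force vector (fx, fy). Friction gamma, temperature kT and time
    step dt are illustrative, not the values used in the thesis.
    """
    noise_amp = math.sqrt(2.0 * kT * dt / gamma)
    new_positions = []
    for (x, y) in positions:
        fx, fy = force((x, y))
        new_positions.append((
            x + dt / gamma * fx + noise_amp * rng.gauss(0.0, 1.0),
            y + dt / gamma * fy + noise_amp * rng.gauss(0.0, 1.0),
        ))
    return new_positions

# harmonic trap as a hypothetical stand-in for the confining interactions
harmonic = lambda p: (-p[0], -p[1])
pts = langevin_step([(1.0, 0.0), (0.0, 1.0)], harmonic,
                    rng=random.Random(42))
```

Shear as described in the abstract would enter through time-dependent boundary positions, i.e. wall particles translated at constant velocity each step.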
Abstract:
If the generic fibre f^{-1}(c) of a Lagrangian fibration f : X → B on a complex Poisson variety X is smooth, compact, and connected, it is isomorphic to the compactification of a complex abelian Lie group. For affine Lagrangian fibres it is not clear what the structure of the fibre is. Adler and van Moerbeke developed a strategy to prove that the generic fibre of a Lagrangian fibration is isomorphic to the affine part of an abelian variety.
We extend their strategy to verify that the generic fibre of a given Lagrangian fibration is the affine part of a (C^*)^r-extension of an abelian variety. This strategy turned out to be successful for all examples we studied. Additionally, we studied examples of Lagrangian fibrations that have the affine part of a ramified cyclic cover of an abelian variety as generic fibre. We obtained an embedding in a Lagrangian fibration that has the affine part of a C^*-extension of an abelian variety as generic fibre. This embedding is not an embedding in the category of Lagrangian fibrations. The C^*-quotient of the new Lagrangian fibration defines in a natural way a deformation of the cyclic quotient of the original Lagrangian fibration.
Abstract:
This research is situated at the intersection of education science, computer science, and school practice, and thus has a strongly interdisciplinary character. From the perspective of education science, it is a research project in the fields of e-learning and multimedia learning, addressing the question of suitable information systems for creating and sharing digital, multimedia, interactive learning modules. To this end, the methodological and didactic advantages of digital learning content over classical media such as book and paper were first compiled, and potential benefits of new Web 2.0 technologies were identified. Building on this, existing authoring tools for creating digital learning modules and existing exchange platforms were analyzed with respect to how far they already support and use Web 2.0 technologies. From the perspective of computer science, the analysis of existing systems yielded a requirements profile for a new authoring tool and a new exchange platform for digital learning modules. Following the Design Science Research approach, the new system was realized in an iterative development process as the web application LearningApps.org and continuously evaluated with teachers from school practice. Current web technologies were used in the development. The result of the research is a production information system that is already used by thousands of users in several countries, both in schools and in industry. An empirical study confirmed that the goal of the system development, namely to simplify the creation and exchange of digital learning modules, was achieved. From the perspective of school practice, LearningApps.org contributes to methodological diversity and to the use of ICT in the classroom. The tool's focus on mobile devices and 1:1 computing corresponds to the general trend in education.
By linking the tool with current software developments for producing digital textbooks, textbook publishers are also addressed as a target group.
Abstract:
Geometric packing problems may be formulated mathematically as constrained optimization problems, but finding a good solution is a challenging task. The more complicated the geometry of the container or the objects to be packed, the more complex the non-penetration constraints become. In this work we propose the use of a physics engine that simulates a system of colliding rigid bodies. It is a tool to resolve interpenetration conflicts and to optimize configurations locally. We develop an efficient and easy-to-implement physics engine that is specialized for collision detection and contact handling. In the course of developing this engine, a number of novel algorithms for distance calculation and intersection volume were designed and implemented, which are presented in this work. They are highly specialized to provide fast responses for cuboids and triangles as input geometry, whereas the concepts they are based on can easily be extended to other convex shapes. Especially noteworthy in this context is our ε-distance algorithm, a novel approach that is not only very robust and fast but also compact in its implementation. Several state-of-the-art third-party implementations are presented, and we show that our implementations beat them in runtime and robustness. The packing algorithm that lies on top of the physics engine is a Monte Carlo based approach implemented for packing cuboids into a container described by a triangle soup. We give an implementation for the SAE J1100 variant of the trunk packing problem. We compare this implementation to several established approaches and show that it gives better results in less time than these existing implementations.
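To give a flavor of the distance computations involved (this is the elementary axis-aligned special case, not the ε-distance algorithm of the thesis), the minimum distance between two axis-aligned boxes reduces to a per-axis gap computation:

```python
import math

def aabb_distance(min_a, max_a, min_b, max_b):
    """Minimum Euclidean distance between two axis-aligned boxes.

    Boxes are given by their min/max corner coordinates; a result of
    0.0 means the boxes touch or overlap. Oriented cuboids and
    triangles, as handled in the thesis, require more elaborate
    algorithms.
    """
    sq = 0.0
    for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b):
        gap = max(lo_a - hi_b, lo_b - hi_a, 0.0)  # per-axis separation
        sq += gap * gap
    return math.sqrt(sq)

# unit cube at the origin vs. a unit cube shifted by (2, 0, 0): distance 1
d = aabb_distance((0, 0, 0), (1, 1, 1), (2, 0, 0), (3, 1, 1))
# overlapping boxes have distance 0
d0 = aabb_distance((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2))
```

A collision-detection broad phase typically uses exactly such cheap bounding-box tests to decide whether the expensive exact distance or intersection-volume computation for the underlying shapes is needed at all.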
Abstract:
Spectroscopy of the 1S-2S transition of antihydrogen confined in a neutral-atom trap, and comparison with the equivalent spectral line in hydrogen, will provide an accurate test of CPT symmetry and the first one in a mixed baryon-lepton system. Also, with neutral antihydrogen atoms, the gravitational interaction between matter and antimatter can be tested unperturbed by the much stronger Coulomb forces.

Antihydrogen is regularly produced at CERN's Antiproton Decelerator by three-body recombination (TBR) of one antiproton and two positrons. The method requires injecting antiprotons into a cloud of positrons, which raises the average temperature of the antihydrogen atoms produced well above the typical 0.5 K trap depths of neutral-atom traps. Therefore only very few antihydrogen atoms can be confined at a time. Precision measurements, like laser spectroscopy, will greatly benefit from larger numbers of simultaneously trapped antihydrogen atoms.

Therefore, the ATRAP collaboration developed a different production method that has the potential to create much larger numbers of cold, trappable antihydrogen atoms. Positrons and antiprotons are stored and cooled in a Penning trap in close proximity. Laser-excited cesium atoms collide with the positrons, forming Rydberg positronium, a bound state of an electron and a positron. The positronium atoms are no longer confined by the electric potentials of the Penning trap, and some drift into the neighboring cloud of antiprotons where, in a second charge-exchange collision, they form antihydrogen. The antiprotons remain at rest during the entire process, so much larger numbers of trappable antihydrogen atoms can be produced. Laser excitation is necessary to increase the efficiency of the process, since the cross sections for charge-exchange collisions scale with the fourth power of the principal quantum number n.

This method, named double charge exchange, was demonstrated by ATRAP in 2004.
Since then, ATRAP has constructed a new combined Penning-Ioffe trap and a new laser system. The goal of this thesis was to implement the double charge-exchange method in this new apparatus and to increase the number of antihydrogen atoms produced.

Compared to our previous experiment, we could raise the numbers of positronium and antihydrogen atoms produced by two orders of magnitude. Most of this gain is due to the larger positron and antiproton plasmas now available, but we also achieved significant improvements in the efficiencies of the individual steps. We thereby showed that double charge exchange can produce numbers of antihydrogen atoms comparable to the TBR method, while the fraction of cold, trappable atoms is expected to be much higher. This work is therefore an important step towards precision measurements with trapped antihydrogen atoms.
Abstract:
Among the different approaches to constructing a fundamental quantum theory of gravity, the Asymptotic Safety scenario conjectures that quantum gravity can be defined within the framework of conventional quantum field theory, but only non-perturbatively. In this case its high-energy behavior is controlled by a non-Gaussian fixed point of the renormalization group flow, such that its infinite cutoff limit can be taken in a well-defined way. A theory of this kind is referred to as non-perturbatively renormalizable. In the last decade a considerable amount of evidence has been collected that in four-dimensional metric gravity such a fixed point, suitable for the Asymptotic Safety construction, indeed exists. This thesis extends the Asymptotic Safety program of quantum gravity by three independent studies that differ in the fundamental field variables the investigated quantum theory is based on, but all exhibit a gauge group of equivalent semi-direct product structure. This allows, for the first time, a direct comparison of three asymptotically safe theories of gravity constructed from different field variables. The first study investigates metric gravity coupled to SU(N) Yang-Mills theory. In particular, the gravitational effects on the running of the gauge coupling are analyzed, and their implications for QED and the Standard Model are discussed. The second analysis amounts to the first investigation of an asymptotically safe theory of gravity in a pure tetrad formulation. Its renormalization group flow is compared to the corresponding approximation of the metric theory, and the influence of its enlarged gauge group on the UV behavior of the theory is analyzed. The third study explores Asymptotic Safety of gravity in the Einstein-Cartan setting. Here, besides the tetrad, the spin connection is considered a second fundamental field.
The larger number of independent field components and the enlarged gauge group render any RG analysis of this system much more difficult than the analogous metric analysis. In order to reduce the complexity of this task, a novel functional renormalization group equation is proposed that allows an evaluation of the flow in a purely algebraic manner. As a first example of its suitability, it is applied to a three-dimensional truncation of the form of the Holst action, with the Newton constant, the cosmological constant and the Immirzi parameter as its running couplings. A detailed comparison of the resulting renormalization group flow to a previous study of the same system demonstrates the reliability of the new equation and suggests its use for future studies of extended truncations in this framework.
Abstract:
Since the beginning of human history, people have influenced their environment. Anthropogenic emissions change the composition of the atmosphere, with a growing influence on, among other things, atmospheric chemistry, the health of humans, flora and fauna, and the climate. The increasing number of huge, growing metropolitan areas goes hand in hand with a spatial concentration of air pollutant emissions, which above all affects the air quality of the rural regions downwind. In this doctoral thesis, carried out within the MEGAPOLI project, the exhaust plume of the megacity Paris was investigated using the mobile aerosol research laboratory MoLa. It is equipped with modern, highly time-resolved instruments for measuring the chemical composition and size distribution of aerosol particles as well as several trace gases. Mobile measurement strategies particularly suited to characterizing urban emissions were developed and applied. Cross-section drives through the plume and through atmospheric background air masses allowed both the determination of the structure and homogeneity of the plume and the calculation of the contribution of the urban emissions to the total atmospheric burden. Quasi-Lagrangian radial drives served to explore the spatial extent of the plume and the transformation processes of the advected pollutants. In combination with modeling, the structure of the plume could be studied in greater depth. Flexible stationary measurements complemented the data set and also permitted comparison measurements with other measurement stations. Data from a fixed measurement station were additionally used to describe the aging of the organic particle fraction. The analysis of the mobile measurement data required the development of a new method for cleaning the data set of local interferences.
Furthermore, the possibilities, limitations and errors involved in applying complex analysis programs to calculate the particles' O/C ratio and to classify the organic aerosol fraction were investigated. A validation of different methods for determining air-mass origin was likewise necessary for the data evaluation. The detailed investigation of the Paris emission plume showed that it can be identified by elevated concentrations of indicators of unprocessed air pollution relative to background values. Its rather homogeneous structure can usually be described by a Gaussian cross-section profile with an exponential decay of the unprocessed pollutant concentrations with increasing distance from the city, caused mainly by turbulent mixing with ambient air masses. It was demonstrated that significant oxidation of the organic aerosol takes place in the advected plume in summer; in winter, by contrast, this process was not observed during the measurements performed. In both seasons the plume consists mainly of soot and organic particle components in the PM1 size range, dominated by the sources traffic and cooking, with heating as an additional source in the cold season. Due to the urban emissions, the PM1 particle mass in the plume was on average 30% above the background value in summer and 10% in winter. Particularly strong enhancements were observed for polyaromatic compounds, with a mean increase of 194% in summer and 131% in winter. Seasonal differences were also found in the particle size distribution of the plume: in winter, in contrast to summer, no additional freshly nucleated small particles appeared, but only particles between about 10 nm and 200 nm that had grown by condensation and coagulation.
The trace-gas concentrations also differed, since chemical reactions depend on temperature and in some cases on radiation. Further applications of MoLa were demonstrated during a transfer drive from Germany to the Spanish Atlantic coast, which resulted in a mapping of the air quality along the route. It turned out that mainly urban agglomerations are affected by unprocessed air pollutants, whereas advected aged substances can influence any region. The investigation of air quality at sites with different exposure to anthropogenic sources extended this finding with an insight into the variation of air quality depending, among other things, on the weather situation and the proximity to emission sources. This showed that the measurement strategies and analysis methods developed here can be applied not only to the investigation of the emission plume of a large city but also to a variety of other scientific and environmental monitoring questions.
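The plume description above (a Gaussian concentration profile across the plume, with the amplitude decaying exponentially with downwind distance from the city) can be sketched as a minimal toy model. All parameter values below (background level, amplitude, decay length, plume width) are invented for illustration; they are not the fitted values from the Paris campaign.

```python
import math

# Minimal sketch of the plume shape described above: a Gaussian cross-section
# whose amplitude decays exponentially with downwind distance. All parameter
# values are illustrative assumptions, not results from the measurements.

def plume_concentration(x_km, y_km, c_bg=2.0, a0=30.0, decay_km=40.0, sigma_km=5.0):
    """Pollutant concentration (arbitrary units) at downwind distance x_km
    and crosswind offset y_km from the plume centerline."""
    amplitude = a0 * math.exp(-x_km / decay_km)           # exponential dilution downwind
    profile = math.exp(-y_km**2 / (2.0 * sigma_km**2))    # Gaussian cross-section
    return c_bg + amplitude * profile

# A cross-section drive through the plume, 20 km downwind of the city:
section = [plume_concentration(20.0, y) for y in range(-15, 16, 5)]
```

Far from the centerline the Gaussian term vanishes and only the background level remains, which is how cross-section drives separate the urban contribution from the background burden.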
Abstract:
The experiment presented in this thesis for measuring the magnetic moment of the proton is based on measuring the ratio of the cyclotron frequency to the Larmor frequency of a single proton stored in a cryogenic double Penning trap. In this work, for the first time, two of the three motional frequencies of the proton were detected simultaneously and non-destructively, in thermal equilibrium with their respective highly sensitive detection systems, which halved the measurement time required to determine the cyclotron frequency. Furthermore, individual spin transitions of a single proton were detected for the first time in the course of this work, which makes the determination of the Larmor frequency possible. Using the continuous Stern-Gerlach effect, a so-called magnetic bottle couples the magnetic moment to the axial mode of the proton's motion. A change of the spin state therefore causes a jump in the axial oscillation frequency, which can be measured non-destructively. The detection of the spin state is complicated by the fact that the axial frequency depends not only on the spin magnetic moment but also on the orbital magnetic moment. The major experimental challenge therefore consists in suppressing energy fluctuations in the radial modes in order to keep spin transitions detectable. Through systematic studies of the stability of the axial frequency and a complete redesign of the experimental setup, this goal was achieved. For the first time, the spin state of a single proton can be determined with high reliability. This thesis thus represents a decisive step towards a high-precision measurement of the magnetic moment of the proton.
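The size of the axial frequency jump produced by the magnetic bottle can be estimated from the standard relation for the continuous Stern-Gerlach effect: in a bottle field B_z ≈ B_0 + B_2 z², a spin flip changes the magnetic-moment projection by 2μ_p and shifts the axial frequency by Δν_z = 2 μ_p B_2 / (4π² m_p ν_z). The bottle strength and axial frequency used below are illustrative assumptions, not the parameters of this particular apparatus.

```python
import math

# Order-of-magnitude estimate of the spin-flip-induced axial frequency jump
# in a magnetic bottle (continuous Stern-Gerlach effect). The values of b2
# and nu_z are assumed for illustration, not taken from the experiment.

MU_P = 1.41e-26   # proton magnetic moment in J/T (CODATA value, rounded)
M_P = 1.67e-27    # proton mass in kg

def axial_frequency_jump(b2, nu_z):
    """Axial frequency shift (Hz) caused by flipping the proton spin;
    the factor 2 reflects the change of the moment projection by 2*MU_P."""
    return 2.0 * MU_P * b2 / (4.0 * math.pi**2 * M_P * nu_z)

# Assumed bottle strength 3e5 T/m^2 and axial frequency ~680 kHz:
jump = axial_frequency_jump(3.0e5, 6.8e5)  # on the order of 0.1 Hz
```

The tiny size of this jump relative to the axial frequency itself is why radial energy fluctuations, which shift the axial frequency through the orbital moment, must be suppressed so strictly before single spin flips become resolvable.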