13 results for Lagrangian particle tracking method
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
The atmosphere is a global influence on the movement of heat and humidity between the continents, and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for understanding different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for reconstructing past aeolian dynamics.

In this thesis, two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, dated to 12,900 years before present. Together the cores cover the last 60,000 years: SM3 the Holocene, and DE3 the marine isotope stages MIS-3 and MIS-2. The frequencies of glacial dust storm events and their paleo wind directions are detected by high-resolution grain-size and provenance analysis of the lake sediments. For this purpose, two different methods are applied: geochemical measurement of the sediment by XRF scanning, and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections).
It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar lies on the western side of the Eifel North-South zone; carbonate-rich aeolian sediment is therefore most likely transported to the Dehner dry maar by easterly winds.
A methodology is developed, the RADIUS-carbonate module, which limits the detection to the aeolian-transported carbonate particles in the sediment.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east-wind frequency are increased in comparison to MIS-2. These results lead to the suggestion that atmospheric circulation was affected by more turbulent conditions during MIS-3, in contrast to the more stable atmospheric circulation during the full glacial conditions of MIS-2.
The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east-wind conditions and of east-wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also reflected in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea surface temperature patterns. Furthermore, the analysis of long-persisting east-wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer. The different glacial experiments consistently show a shift of a long-lasting high over the Baltic Sea towards the northwest, directly above the Scandinavian Ice Sheet, together with contemporaneously enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe.
While the proxy data from sediment cores yield only a binary signal of wind direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models is successful. It shows a possible distribution of high- and low-pressure areas and thus the direction and strength of wind fields capable of transporting dust. In conclusion, combining numerical models, which enhance the understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.
Abstract:
Polymer nanoparticles functionalized on the surface with photo-responsive labels were synthesized. In a first synthetic step, polystyrene was copolymerized with the cross-linker divinylbenzene and poly(ethylene glycol) acrylate in a miniemulsion, to produce nano-sized spheres (~60 nm radius) with terminal hydroxyl groups, which were functionalized in a subsequent synthetic step with photo-responsive labels. For this purpose, two photo-active molecular structures were used separately: anthracene, which is well known to form covalently bonded dimers upon photo-excitation, and pyrene, which only forms short-lived excited-state dimers (excimers). Acid derivatives of these labels (9-anthracene carboxylic acid and 1-pyrene butyric acid) were bonded to the hydroxyl terminal groups of the nanoparticles through an esterification reaction, via the intermediate formation of the corresponding acid chloride.
The obtained labeled nanoparticles proved to be highly hydrophobic structures. They formed lyophobic suspensions in water, which, after analysis by dynamic light scattering (DLS) and ultramicroscopic particle tracking, appeared to equilibrate as a collection of singly dispersed nanoparticles together with a few nanoparticle aggregates. The relative amount of aggregates decreased with increasing amounts of the surfactant sodium dodecyl sulfate (SDS), thus confirming that aggregation is an equilibrated state resulting from lyophobicity. The formation of such aggregates was corroborated using scanning electron microscopy (SEM). The photo-irradiation of the lyophobic aqueous suspensions of anthracene-labeled nanoparticles (An-NP) resulted in the formation of higher aggregates, as evidenced by DLS and ultramicroscopy. The obtained state of aggregation could be reverted by sonication. The possibility to re-aggregate the system in subsequent photo-excitation and sonication cycles was established.
Likewise, the photo-irradiation of lyophobic aqueous suspensions of pyrene-labeled nanoparticles (Py-NP) resulted in the formation of higher aggregates, as evidenced by DLS and ultramicroscopy. These appeared to remain aggregated due to hydrophobic interactions. This system could also be re-dispersed by sonication and re-aggregated in subsequent cycles of photo-excitation and sonication.
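DLS, as used above, infers particle size from the measured diffusion coefficient through the Stokes-Einstein relation. A minimal sketch of that conversion; the numerical values are illustrative, not data from the thesis:

```python
# Stokes-Einstein: hydrodynamic radius from a DLS-measured diffusion
# coefficient.  Illustrative values only -- not results from the thesis.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(D, T=298.15, eta=0.89e-3):
    """Radius (m) of a sphere diffusing with coefficient D (m^2/s)
    in a solvent of viscosity eta (Pa s) at temperature T (K)."""
    return K_B * T / (6.0 * math.pi * eta * D)

# A particle in water at 25 C with D ~ 4.09e-12 m^2/s comes out near
# the ~60 nm radius quoted for the spheres above:
r = hydrodynamic_radius(4.09e-12)
print(f"R_h = {r * 1e9:.0f} nm")
```

Aggregation shows up in such an analysis as an apparent radius well above the single-particle value, which is how the DLS evidence for photo-induced aggregates is read.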
Abstract:
This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by numerical solution of the Navier-Stokes equation on a lattice. To this end, the lattice Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which gives the advantage of treating dense systems, in particular away from thermal equilibrium. The lattice Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e. the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the lattice Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The fluctuation-dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm scales linearly, too. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particle-field system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail.
For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only nearest-neighbor sites of the lattice interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmarking results of the algorithm are presented and discussed.
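The friction coupling described above can be sketched in one dimension: the particle feels a Stokes-type friction against the locally interpolated fluid velocity, and the opposite momentum change is deposited on the lattice, so the coupling stays local and momentum-conserving. This is a toy sketch, not the thesis implementation; the 1D periodic lattice and all parameter values are invented for illustration:

```python
# Toy 1D sketch of a point-particle / lattice-fluid friction coupling.
# Not the thesis code: the lattice, units and parameters are invented.
import numpy as np

def friction_coupling(x, v, u, dx, gamma, dt):
    """One explicit step.  The particle (unit mass) feels
    F = -gamma * (v - u_local); the fluid node (unit mass per cell)
    receives -F*dt, so total momentum is conserved exactly."""
    i = int(x / dx) % len(u)       # nearest node on the periodic lattice
    F = -gamma * (v - u[i])        # friction against local fluid velocity
    v = v + F * dt                 # particle momentum change: +F*dt
    u[i] = u[i] - F * dt           # fluid momentum change:    -F*dt
    return (x + v * dt) % (len(u) * dx), v, u

# A particle shot through a fluid at rest drags the fluid along:
u = np.zeros(8)
x, v = 0.0, 1.0
for _ in range(200):
    x, v, u = friction_coupling(x, v, u, dx=1.0, gamma=0.5, dt=0.1)
print(v, v + u.sum())   # particle slowed; total momentum still ~1
```

Because each step only touches the node(s) next to the particle, the cost per particle is constant, which is the locality that makes the full algorithm scale linearly.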
Abstract:
The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed. In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows. An important range of applications for the lattice Boltzmann method is formed by microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations which are able to complement the achievements of theory and experiment.
Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, where the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced symmetry model. In the case of a Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
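For plane Poiseuille flow, the analytic profile against which such a tunable-slip boundary condition can be validated is standard: a Navier slip length b adds a constant plug component to the parabolic no-slip profile. A small check of that textbook formula (symbols are generic, not the thesis notation):

```python
# Plane Poiseuille flow between walls at y = +/- h, driven by a constant
# pressure gradient G, with Navier slip length b at both walls:
#   u(y) = G/(2*mu) * (h**2 - y**2) + G*h*b/mu
# Standard textbook result, shown only to illustrate how a finite slip
# length shifts the whole profile by a plug component G*h*b/mu.

def poiseuille_slip(y, h, G, mu, b):
    return G / (2.0 * mu) * (h * h - y * y) + G * h * b / mu

h, G, mu = 1.0, 2.0, 0.5
# No slip (b = 0): velocity vanishes at the wall.
print(poiseuille_slip(h, h, G, mu, b=0.0))
# Navier condition: wall velocity equals b times the wall shear rate.
b = 0.3
wall_shear = G * h / mu          # |du/dy| at y = h for this profile
print(poiseuille_slip(h, h, G, mu, b), b * wall_shear)
```

Fitting a simulated channel profile to this form is one way to measure the slip length produced by a given choice of the boundary-condition parameter.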
Abstract:
Reactive halogen compounds are known to play an important role in a wide variety of atmospheric processes, such as the atmospheric oxidation capacity and coastal new particle formation. In this work, novel analytical approaches combining diffusion denuder/impinger sampling techniques with gas chromatographic-mass spectrometric (GC-MS) determination are developed to measure activated chlorine compounds (HOCl and Cl2), activated bromine compounds (HOBr, Br2, BrCl, and BrI), activated iodine compounds (HOI and ICl), and molecular iodine (I2). The denuder/GC-MS methods have been applied to field measurements in the marine boundary layer (MBL). High mixing ratios (of the order of 100 ppt) of activated halogen compounds and I2 are observed in the coastal MBL in Ireland, which explains the ozone destruction observed there. The emission of I2 is found to correlate inversely with tidal height and positively with the levels of O3 in the surrounding air. In addition, the release is found to be dominated by algae species composition and biomass density, which supports the hot-spot hypothesis of atmospheric iodine chemistry. The observations of elevated I2 concentrations substantially support the existence of higher concentrations of littoral iodine oxides and thus the connection to the strong ultra-fine particle formation events in the coastal MBL.
Abstract:
In this thesis, the influence of composition changes on the glass transition behavior of binary liquids in two and three spatial dimensions (2D/3D) is studied in the framework of mode-coupling theory (MCT). The well-established MCT equations are generalized to isotropic and homogeneous multicomponent liquids in arbitrary spatial dimensions. Furthermore, a new method is introduced which allows a fast and precise determination of special properties of glass transition lines. The new equations are then applied to the following model systems: binary mixtures of hard disks/spheres in 2D/3D, binary mixtures of dipolar point particles in 2D, and binary mixtures of dipolar hard disks in 2D. Some general features of the glass transition lines are also discussed. The direct comparison of the binary hard disk/sphere models in 2D/3D shows similar qualitative behavior. In particular, for binary mixtures of hard disks in 2D the same four so-called mixing effects are identified as were found before by Götze and Voigtmann for binary hard spheres in 3D [Phys. Rev. E 67, 021502 (2003)]. For instance, depending on the size disparity, adding a second component to a one-component liquid may lead to a stabilization of either the liquid or the glassy state. The MCT results for the 2D system are in qualitative agreement with available computer simulation data. Furthermore, the glass transition diagram found for binary hard disks in 2D strongly resembles the corresponding random close packing diagram. Concerning dipolar systems, it is demonstrated that the experimental system of König et al. [Eur. Phys. J. E 18, 287 (2005)] is well described by binary point dipoles in 2D, through a comparison between the experimental partial structure factors and those from computer simulations. For such mixtures of point particles it is demonstrated that MCT always predicts a plasticization effect, i.e.
a stabilization of the liquid state due to mixing, in contrast to binary hard disks in 2D or binary hard spheres in 3D. It is demonstrated that the predicted plasticization effect is in qualitative agreement with experimental results. Finally, a glass transition diagram for binary mixtures of dipolar hard disks in 2D is calculated. These results demonstrate that at higher packing fractions there is a competition between the mixing effects occurring for binary hard disks in 2D and those for binary point dipoles in 2D.
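How such glass transition lines are located numerically can be illustrated with the schematic F12 model of MCT, a standard one-correlator toy model (not the multicomponent equations of the thesis): the nonergodicity parameter f solves f/(1-f) = v1*f + v2*f**2, and a simple fixed-point iteration converges to its largest solution, with f > 0 signalling a glass.

```python
# Schematic F12 model of mode-coupling theory: the nonergodicity
# parameter f solves  f/(1-f) = v1*f + v2*f**2.  Iterating the map
# f -> m/(1+m), m = v1*f + v2*f**2, from f = 1 converges to the largest
# solution.  A textbook toy model, shown only to illustrate how the
# glass transition lines discussed above are located numerically.

def nonergodicity_parameter(v1, v2, tol=1e-12, itmax=100_000):
    f = 1.0                              # start from the arrested limit
    for _ in range(itmax):
        m = v1 * f + v2 * f * f          # memory kernel
        f_new = m / (1.0 + m)
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f_new

# Pure F2 model (v1 = 0): the glass transition sits at v2 = 4, f = 1/2.
print(nonergodicity_parameter(0.0, 4.1))   # above 0.5: glass
print(nonergodicity_parameter(0.0, 3.9))   # essentially 0: liquid
```

Scanning the coupling parameters (which in the full theory depend on composition and packing fraction) for the point where f jumps from zero to a finite value traces out a transition line.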
Abstract:
The Standard Model of elementary particle physics was developed to describe the fundamental particles which constitute matter and the interactions between them. The Large Hadron Collider (LHC) at CERN in Geneva was built to solve some of the remaining open questions in the Standard Model and to explore physics beyond it, by colliding two proton beams at world-record centre-of-mass energies. The ATLAS experiment is designed to reconstruct particles and their decay products originating from these collisions. The precise reconstruction of particle trajectories plays an important role in the identification of particle jets which originate from bottom quarks (b-tagging). This thesis describes the step-wise commissioning of the ATLAS track reconstruction and b-tagging software and one of the first measurements of the b-jet production cross section in pp collisions at sqrt(s)=7 TeV with the ATLAS detector. The performance of the track reconstruction software was studied in great detail, first using data from cosmic ray showers and then collisions at sqrt(s)=900 GeV and 7 TeV. The good understanding of the track reconstruction software allowed a very early deployment of the b-tagging algorithms. First studies of these algorithms and the measurement of the b-tagging efficiency in the data are presented. They agree well with predictions from Monte Carlo simulations. The b-jet production cross section was measured with the 2010 dataset recorded by the ATLAS detector, employing muons in jets to estimate the fraction of b-jets. The measurement is in good agreement with the Standard Model predictions.
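The last step of such a counting measurement is simple arithmetic: the cross section is the background-subtracted, efficiency-corrected yield divided by the integrated luminosity. A generic sketch with invented placeholder numbers (not the ATLAS values):

```python
# Generic counting-experiment cross section:
#   sigma = (N_selected - N_background) / (efficiency * L_int)
# All numbers below are illustrative placeholders, not ATLAS results.

def cross_section(n_selected, n_background, efficiency, lumi_pb):
    """Cross section in pb from event counts, a selection (e.g.
    b-tagging) efficiency, and an integrated luminosity in pb^-1."""
    return (n_selected - n_background) / (efficiency * lumi_pb)

sigma = cross_section(n_selected=12_500, n_background=2_500,
                      efficiency=0.40, lumi_pb=34.0)
print(f"sigma = {sigma:.1f} pb")
```

In the measurement described above, the background fraction itself comes from the muon-in-jet template fit, and the efficiency from the b-tagging calibration, so the uncertainties on those two inputs propagate directly into the cross section.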
Abstract:
So far, tracing the explosives used has been of limited scope in forensic investigations of explosions, because the material is as a rule destroyed in the explosion. The tracing of explosives is to be facilitated with the help of identification tagging substances (taggants). These represent a unique code that can be recovered and identified even after a detonation. The unambiguous information assigned to the code can thus be read out and provides the police with further leads in the investigation.

The aim of the present work is to investigate the behavior of selected rare earth elements (REE) during an explosion. An identification taggant based on lanthanide phosphates offers the possibility of combining different lanthanides within a single particle, whereby a large number of codes can be generated. A change in the original composition of the code can thus be traced very well even after an explosion through the analysis of a single particle, and the suitability of the taggant can thereby be assessed. A further objective is to verify the applicability of inductively coupled plasma mass spectrometry (ICP-MS) and particle analysis by scanning electron microscopy (SEM) for the analysis of the dispersed identification taggants.

Taken together, the results of the ICP-MS analysis and the SEM particle analysis indicate a fractionation of the investigated lanthanides, or of their reaction products, after the explosion, depending on their thermal stability. The findings show an enrichment of the lanthanides with higher temperature resistance in larger particles, which implies an enrichment of lanthanides with lower temperature resistance in smaller particles.
This can be partly explained by a fractionation process depending on the temperature stability of the lanthanides or their reaction products. The mechanisms underlying the fractionation, and their mutual influence during an explosion, could not be conclusively clarified within the scope of this work.

The general applicability, and the in some circumstances necessary complementary use, of the two methods ICP-MS and SEM particle analysis is demonstrated in this work. With its large examined sample area and high accuracy, ICP-MS is a good method for characterizing the concentration ratios of the investigated lanthanides. SEM particle analysis, by contrast, enables an unambiguous differentiation of the element association per particle in the case of contamination of the samples with other lanthanide-containing particles. Unlike ICP-MS, it can therefore provide information about the nature and composition of the contamination.

Within the investigations carried out, the sampling technique used for ICP-MS proved to be an ideal form of sampling. On other surfaces, however, it could lead to systematically distorted results as a consequence of the fractionation into different particle sizes. To ensure the general applicability of ICP-MS with regard to the analysis of dispersed lanthanides, further detonations on different sample surfaces should be carried out and, if necessary, further sampling, digestion and enrichment procedures should be evaluated.
Abstract:
Among all possible realizations of quark and antiquark assemblies, the nucleon (the proton and the neutron) is the most stable of all hadrons and has consequently been the subject of intensive studies. Its mass, shape, radius and more complex representations of its internal structure have been measured for several decades using different probes. The proton (spin 1/2) is described by the electric (GE) and magnetic (GM) form factors, which characterize its internal structure. The simplest way to measure the proton form factors consists of measuring the angular distribution of electron-proton elastic scattering, accessing the so-called space-like region where q2 < 0. Using the crossed channel antiproton proton <--> e+e-, one accesses another kinematical region, the so-called time-like region where q2 > 0. However, due to the antiproton proton <--> e+e- threshold q2th, only the kinematical domain q2 > q2th > 0 is available. To access the unphysical region, one may use the antiproton proton --> pi0 e+ e- reaction, where the pi0 takes away a part of the system energy, allowing q2 to be varied between q2th and almost 0. This thesis aims to show the feasibility of such measurements with the PANDA detector, which will be installed on the new high-intensity antiproton ring at the FAIR facility in Darmstadt. To describe the antiproton proton --> pi0 e+ e- reaction, a Lagrangian-based approach is developed. The 5-fold differential cross section is determined and related to linear combinations of hadronic tensors. Under the assumption of one-nucleon exchange, the hadronic tensors are expressed in terms of the two complex proton electromagnetic form factors. An extraction method is developed which provides access to the proton electromagnetic form factor ratio R = |GE|/|GM| and, for the first time in an unpolarized experiment, to the cosine of the phase difference. Such measurements have never been performed in the unphysical region up to now.
Extended simulations were performed to show how the ratio R and the cosine can be extracted from the positron angular distribution. Furthermore, a model is developed for the antiproton proton --> pi0 pi+ pi- background reaction, considered the most dangerous one. The background-to-signal cross section ratio was estimated under different combinations of cuts on the particle identification information from the different detectors and on the kinematic fits. The background contribution can be reduced to the percent level or even less, with a corresponding signal efficiency ranging from a few % to 30%. The precision of the determination of the ratio R and of the cosine is estimated from the expected counting rates via the Monte Carlo method. A part of this thesis is also dedicated to more technical work: the study of the prototype of the electromagnetic calorimeter and the determination of its resolution.
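The principle behind extracting R from the positron angular distribution can be illustrated with the standard time-like one-photon-exchange form, in which the distribution is linear in cos^2(theta), so R follows from a fitted slope and intercept. The sketch below inverts that linear dependence on noise-free toy data; the kinematics and the "true" R are invented, and this is not the PANDA analysis code:

```python
# In the one-photon-exchange approximation, the lepton angular
# distribution in the time-like region is linear in cos^2(theta):
#   W(c) ~ A + B*c^2,  A = |GM|^2 + |GE|^2/tau,  B = |GM|^2 - |GE|^2/tau
# with tau = q^2/(4*M^2), so R = |GE|/|GM| = sqrt(tau*(A-B)/(A+B)).
# Toy data with an invented "true" R -- not PANDA simulation output.
import numpy as np

tau = 1.5                        # q^2 / (4 M^2), illustrative value
R_true = 0.8
GM2 = 1.0                        # set |GM|^2 = 1 (only the ratio matters)
GE2 = R_true ** 2 * GM2

c = np.linspace(-0.9, 0.9, 50)                   # cos(theta) acceptance
w = (1 + c**2) * GM2 + (1 - c**2) * GE2 / tau    # ideal angular distribution

B, A = np.polyfit(c**2, w, 1)                    # fit w = A + B*c^2
R_fit = np.sqrt(tau * (A - B) / (A + B))
print(R_fit)
```

With real counting data, the fit is done on histogrammed events with statistical weights, which is where the expected counting rates enter the precision estimate.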
Abstract:
Since the beginning of human history, people have influenced their environment. Anthropogenic emissions change the composition of the atmosphere, which has an increasing influence on, among other things, atmospheric chemistry, the health of humans, flora and fauna, and the climate. The growing number of huge, expanding metropolises goes hand in hand with a spatial concentration of air pollutant emissions, which above all affects the air quality of the rural regions downwind. In this doctoral thesis, within the framework of the MEGAPOLI project, the emission plume of the megacity Paris was investigated using the mobile aerosol research laboratory MoLa. It is equipped with modern instruments of high time resolution for measuring the chemical composition and size distribution of aerosol particles as well as several trace gases. Mobile measurement strategies particularly suited to characterizing urban emissions were developed and applied. Cross-section drives through the emission plume and through atmospheric background air masses allowed both the determination of the structure and homogeneity of the plume and the calculation of the contribution of the urban emissions to the total atmospheric burden. Quasi-Lagrangian radial drives served to explore the spatial extent of the plume and the transformation processes of the advected air pollutants. In combination with model simulations, the structure of the plume could be examined in greater depth. Flexible stationary measurements complemented the data set and also allowed comparison measurements with other measurement stations. Data from a fixed measurement station were additionally used to describe the aging of the organic particle fraction. The analysis of the mobile measurement data required the development of a new method for cleaning the data set of local interference.
Furthermore, the possibilities, limits and errors in the application of complex analysis programs for calculating the O/C ratio of the particles and for classifying the organic aerosol fraction were examined. A validation of different methods for determining the air mass origin was also necessary for the evaluation. The detailed investigation of the Paris emission plume showed that it can be identified by the increase in the concentrations of indicators of unprocessed air pollution relative to background values. Its rather homogeneous structure can mostly be described by a Gaussian shape in cross section, with an exponential decrease of the unprocessed pollutant concentrations with increasing distance from the city. Turbulent mixing with ambient air masses is mainly responsible for this. It could be demonstrated that a clear oxidation of the organic aerosol takes place in the advected plume in summer; in winter, by contrast, this process could not be observed during the measurements performed. In both seasons the plume consists mainly of soot and organic particle components in the PM1 size range, with traffic and cooking, plus heating in the cold season, as the dominant sources. The urban emissions increased the PM1 particle mass in the plume relative to the background value by an average of 30% in summer and 10% in winter. Particularly strong enhancements were observed for polyaromatics, with a mean increase of 194% in summer and 131% in winter. Seasonal differences were also found in the size distribution of the plume particles: in winter, in contrast to summer, no additional freshly nucleated small particles appeared, but only particles between about 10 nm and 200 nm grown by condensation and coagulation.
The trace gas concentrations also differed, since chemical reactions depend on temperature and sometimes on radiation. Further possible applications of MoLa were demonstrated during a transfer drive from Germany to the Spanish Atlantic coast, which resulted in a mapping of air quality along the route. It became apparent that mainly urban agglomerations are affected by unprocessed air pollutants, whereas advected aged substances can influence any region. The investigation of air quality at sites with different exposure to anthropogenic sources extended this statement with an insight into the variation of air quality depending on, among other things, the weather situation and the proximity to emission sources. It could thus be shown that the measurement strategies and analysis methods developed can be applied not only to the investigation of the emission plume of a metropolis but also to various other scientific and environmental monitoring questions.
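The plume structure summarized above (Gaussian cross sections whose excess over background decays roughly exponentially with distance from the city) can be collected into a single model function. All parameter values below are illustrative placeholders, not MoLa campaign results:

```python
# Toy model of the plume enhancement over background: Gaussian across
# the plume, exponential dilution downwind.  Parameter values are
# invented for illustration, not fitted MEGAPOLI/MoLa results.
import numpy as np

def plume_excess(x_cross_km, dist_km, C0=30.0, sigma_km=5.0, L_km=40.0):
    """Pollutant enhancement over background (arbitrary units):
    Gaussian in the cross-plume coordinate x_cross_km, exponential
    decay with downwind distance dist_km (decay length L_km)."""
    return (C0 * np.exp(-x_cross_km**2 / (2.0 * sigma_km**2))
               * np.exp(-dist_km / L_km))

# On the plume axis, one decay length downwind, the excess is C0/e:
print(plume_excess(0.0, 40.0))
```

Fitting this kind of function to the cross-section drives yields the plume width and decay length, while the ratio of excess to background gives the percentage enhancements quoted above.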
Abstract:
The fabrication of polymer solar cells from the aqueous phase is an attractive alternative to conventional solvent-based formulations. The advantages of solar cells prepared from aqueous solution lie in particular in the environmentally friendly production process and in the possibility of generating printable optoelectronic devices. Processability of hydrophobic semiconductors in an aqueous medium is achieved by dispersing the materials in the form of nanoparticles. The semiconductors are transferred into a dispersion via the solvent evaporation method. The idea of particle-based solar cells has been realized before, but a precise characterization of the particles and a comprehensive understanding of the entire fabrication process were lacking. The aim of this work is therefore to gain detailed insight into the fabrication process of particle-based solar cells, to uncover possible weaknesses and to eliminate them, in order to improve future applications. For the preparation of solar cells from aqueous dispersions, poly(3-hexylthiophene-2,5-diyl)/[6,6]-phenyl-C61-butyric acid methyl ester (P3HT/PCBM) was used as the donor/acceptor system. The investigations focused on the particle morphology on the one hand and on the generation of a suitable particle layer on the other; both parameters affect the solar cell efficiency. The morphology was determined both spectroscopically by photoluminescence measurements and visually by electron microscopy. In this way the particle morphology could be fully elucidated, revealing parallels to the structure of solvent-based solar cells. In addition, a dependence of the morphology on the preparation temperature was observed, which allows simple control of the particle structure.
For the formation of the particle layer, direct as well as interface-mediated coating methods were employed. Of these techniques, however, only spin coating proved to be a viable method for transferring the particles from the dispersion into a homogeneous film. Furthermore, the post-treatment of the particle layer by ethanol washing and thermal annealing was a focus of this work. Both measures had a positive effect on the efficiency of the solar cells and contributed decisively to improving the devices. Overall, the findings obtained provide a detailed overview of the challenges that arise in the use of water-based dispersions. The requirements of particle-based solar cells could be laid open, and a solar cell with an efficiency of 0.53% was thereby fabricated. This result is not yet the optimum, however, and leaves room for improvement.
Abstract:
This thesis deals with the development of a novel simulation technique for macromolecules in electrolyte solutions, with the aim of improving performance over current molecular-dynamics-based simulation methods. In solutions containing charged macromolecules and salt ions, it is the complex interplay of electrostatic interactions and hydrodynamics that determines the equilibrium and non-equilibrium behavior. However, the treatment of the solvent and dissolved ions makes up the major part of the computational effort, so an efficient modeling of both components is essential for the performance of a method. With the novel method we treat the solvent in a coarse-grained fashion and replace the explicit-ion description by a dynamic mean-field treatment. Hence we combine particle- and field-based descriptions in a hybrid method and thereby effectively solve the electrokinetic equations. The developed algorithm is tested extensively in terms of accuracy and performance, and suitable parameter sets are determined. As a first application we study charged polymer solutions (polyelectrolytes) in shear flow, with a focus on their viscoelastic properties; this also includes semidilute solutions, which are computationally demanding. Second, we study the electro-osmotic flow on superhydrophobic surfaces, where we perform a detailed comparison with theoretical predictions.
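For electro-osmotic flow, the classic reference point is the Helmholtz-Smoluchowski velocity; for a homogeneously slipping wall in the thin-double-layer limit, hydrodynamic slip is known to amplify it by a factor (1 + b/lambda_D). This is the textbook baseline, not the thesis's patterned superhydrophobic case, and the numbers are illustrative:

```python
# Helmholtz-Smoluchowski electro-osmotic velocity u = -eps*zeta*E/eta,
# amplified by (1 + b/lambda_D) for a wall with Navier slip length b in
# the thin-double-layer limit.  Textbook baseline with illustrative
# values -- not a result from the thesis.

def eof_velocity(eps, zeta, E, eta, b=0.0, lambda_D=None):
    """Plug-flow velocity (m/s) far from a flat charged wall."""
    u = -eps * zeta * E / eta
    if b and lambda_D:
        u *= 1.0 + b / lambda_D      # slip amplification
    return u

eps = 80 * 8.854e-12                  # permittivity of water, F/m
u_noslip = eof_velocity(eps, zeta=-0.05, E=1e4, eta=1e-3)
u_slip = eof_velocity(eps, zeta=-0.05, E=1e4, eta=1e-3,
                      b=30e-9, lambda_D=10e-9)
print(u_noslip, u_slip / u_noslip)    # slip quadruples the flow here
```

Comparisons of this kind, between simulated plug velocities and analytic predictions, are the natural validation test for an electrokinetics solver.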
Abstract:
Future experiments in nuclear and particle physics are moving towards the high-luminosity regime in order to access rare processes. In this framework, particle detectors require high rate capability together with excellent timing resolution for precise event reconstruction. To achieve this, the development of dedicated Front-End Electronics (FEE) for detectors has become increasingly challenging and expensive. A current trend in R&D is therefore towards flexible FEE that can easily be adapted to a great variety of detectors without impairing the required high performance. This thesis reports on a novel FEE for two different detector types: imaging Cherenkov counters and plastic scintillator arrays. The former requires high sensitivity and precision for the detection of single-photon signals, while the latter is characterized by the slower and larger signals typical of scintillation processes. The FEE design was developed using high-bandwidth preamplifiers and fast discriminators which provide Time-over-Threshold (ToT). The use of discriminators allowed for low power consumption, minimal dead times and self-triggering capability, all fundamental aspects for high-rate applications. The output signals of the FEE are read out by a high-precision FPGA-based TDC system. A full characterization of the analogue signals under realistic conditions showed that the ToT information can be used in a novel way for charge measurements or walk corrections, thus improving the attainable timing resolution. Detailed laboratory investigations proved the feasibility of the ToT method. The full readout chain was investigated in test experiments at the Mainz Microtron: counting rates of several MHz per channel were achieved, and a timing resolution of better than 100 ps after the ToT-based walk correction was obtained. Applications to fast time-of-flight counters and future developments of the FEE have also been investigated recently.
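The ToT-based walk correction works because a leading-edge discriminator fires later on small pulses, while ToT measures the pulse size; the walk can therefore be calibrated as a function of ToT and subtracted. A toy sketch of that calibration (the pulse model and all constants are invented, not the FEE's):

```python
# Toy time-walk correction using Time-over-Threshold (ToT): calibrate
# the walk as a polynomial in ToT against a reference time, then
# subtract it.  Walk curve and constants are invented for illustration.
import numpy as np

def walk_calibration(tot, t_meas, t_ref, deg=2):
    """Fit the measured walk (t_meas - t_ref) as a polynomial in ToT."""
    return np.polyfit(tot, t_meas - t_ref, deg)

def walk_correct(t_meas, tot, coeffs):
    return t_meas - np.polyval(coeffs, tot)

# Synthetic calibration data: all hits share one true time, but the
# leading-edge time walks more for small pulses (small ToT).
tot = np.linspace(5.0, 50.0, 46)                   # ns
true_t = 100.0                                     # ns
walk = 2.0 - 0.06 * tot + 0.0006 * tot ** 2        # ns, toy walk curve
t_meas = true_t + walk

coeffs = walk_calibration(tot, t_meas, np.full_like(tot, true_t))
resid = walk_correct(t_meas, tot, coeffs) - true_t
print(np.max(np.abs(resid)))   # residual walk after correction
```

In the real system the reference time comes from a precise external counter during calibration runs; once the coefficients are stored, the correction is a cheap per-hit lookup in the TDC data stream.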