9 results for nutrients and sulfur application
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
Liquid Crystal Polymer Brushes and their Application as Alignment Layers in Liquid Crystal Cells
Polymer brushes with liquid crystalline (LC) side chains were synthesized on planar glass substrates and their nematic textures were investigated. The LC polymers consist of an acrylate or methacrylate main chain and a phenyl benzoate group as the mesogenic unit, which is connected to the main chain via a flexible alkyl spacer of six CH2 units. The LC polymer brushes were prepared by the grafting-from technique: polymerization proceeds from azo-initiators that have previously been self-assembled on the substrate. LC polymer brushes with thicknesses from a few nm to 230 nm were synthesized by varying the monomer concentration and the polymerization time. The brushes were thick enough to allow direct observation of the nematic textures with a polarizing microscope. LC polymer brushes grown on untreated glass substrates exhibited irregular textures (polydomains). The domain size is in the range of a few micrometers and depends only weakly on the brush thickness. Investigations of the texture-temperature relationship revealed that the brushes exhibit a surface memory effect: the identical texture reappears after the LC brush sample has undergone thermal isotropization or a solvent treatment, during which the nematic LC state had been completely destroyed. The surface memory effect is attributed to strong anchoring of the orientation of the mesogenic units to heterogeneities at the substrate surface; the exact nature of these heterogeneities is unknown. The effect was also observed for LC brushes swollen with low-molecular-weight nematic molecules.
Rubbing the glass substrate with a piece of velvet cloth prior to the surface modification with the initiator and the brush growth gives rise to homogeneous alignment of the mesogenic units in the LC polymer side chains; monodomain textures were obtained for these LC brushes. The mechanism for the homogeneous alignment is based on the transfer of Nylon fibers during the rubbing process. A surfactant was mixed with the azo-initiator when modifying rubbed substrates for subsequent brush generation; such brushes exhibited biaxial optical properties. Hybrid LC cells made from a substrate modified with biaxial brushes and a rubbed glass substrate show an orientation with a tilt angle of α = 15.6°. This work shows that LC brushes grown on rubbed surfaces fulfill an important criterion for alignment layers: the formation of macroscopic monodomains. First results indicate that by diluting the brush with molecules which are also covalently bound to the surface but induce a different orientation, a system is obtained in which the two conflicting alignment mechanisms can be used to generate a tilted alignment. To allow the alignment layers to be used in a potential product, subsequent work should focus on how easily, and over what range, the tilt angle can be controlled.
Abstract:
A path integral simulation algorithm which includes a higher-order Trotter approximation (HOA) is analyzed and compared to an approach which includes the correct quantum mechanical pair interaction (effective propagator, EPr). It is found that the HOA algorithm converges to the quantum limit with increasing Trotter number P as P^{-4}, while the EPr algorithm converges as P^{-2}. The convergence rate of the HOA algorithm is analyzed for various physical systems such as a harmonic chain, a particle in a double-well potential, gaseous argon, gaseous helium, and crystalline argon. A new expression for the estimator for the pair correlation function in the HOA algorithm is derived. A new path integral algorithm, the hybrid algorithm, is developed. It combines an exact treatment of the quadratic part of the Hamiltonian with the higher-order Trotter expansion techniques. For the discrete quantum sine-Gordon chain (DQSGC), it is shown that this algorithm works more efficiently than all other improved path integral algorithms discussed in this work. The new simulation techniques developed in this work allow the analysis of the DQSGC and disordered model systems in the highly quantum mechanical regime using path integral molecular dynamics (PIMD) and adiabatic centroid path integral molecular dynamics (ACPIMD). The ground state phonon dispersion relation is calculated for the DQSGC by the ACPIMD method. It is found that the excitation gap at zero wave vector is reduced by quantum fluctuations. Two different phases exist: one phase with a finite excitation gap at zero wave vector, and a gapless phase where the excitation gap vanishes. The reaction of the DQSGC to an external driving force is analyzed at T=0. In the gapless phase the system creeps if a small force is applied, and in the phase with a gap the system is pinned. At a critical force, the system undergoes a depinning transition in both phases and flow is induced.
The analysis of the DQSGC is extended to models with disordered substrate potentials. Three different cases are analyzed: disordered substrate potentials with roughness exponents H=0 and H=1/2, and a model with disordered bond lengths. For all models, the ground state phonon dispersion relation is calculated.
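The convergence orders quoted above can be illustrated on a toy model: for a symmetric (second-order) Trotter factorization of exp(-β(A+B)), the error in the trace decays as P^{-2}, the same order as the EPr algorithm. The sketch below uses a small random Hermitian matrix model, not one of the physical systems of the thesis, and estimates the order numerically:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(n):
    m = rng.standard_normal((n, n))
    return (m + m.T) / 2

A, B = rand_herm(6), rand_herm(6)
beta = 1.0
exact = np.trace(expm(-beta * (A + B)))

def strang_trace(P):
    # symmetric (second-order) Trotter factorization with P slices
    step = expm(-beta * A / (2 * P)) @ expm(-beta * B / P) @ expm(-beta * A / (2 * P))
    return np.trace(np.linalg.matrix_power(step, P))

errs = {P: abs(strang_trace(P) - exact) for P in (8, 16, 32)}
# for a P^-2 method, doubling P should reduce the error by a factor of ~4
order = np.log2(errs[8] / errs[16])
print(f"estimated convergence order: {order:.2f}")
```

Doubling P from 8 to 16 reduces the error by roughly a factor of four, confirming the second-order behavior; a fourth-order factorization of the HOA type would show a factor of roughly sixteen instead.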
Abstract:
In this treatise we consider finite systems of branching particles where the particles move independently of each other according to d-dimensional diffusions. Particles are killed at a position-dependent rate, leaving at their death position a random number of descendants according to a position-dependent reproduction law. In addition, particles immigrate at a constant rate (one immigrant per immigration time). A process with the above properties is called a branching diffusion with immigration (BDI). In the first part we present the model in detail and discuss the properties of the BDI under our basic assumptions. In the second part we consider the problem of reconstructing the trajectory of a BDI from discrete observations. We observe the positions of the particles at discrete times; in particular, we assume that we have no information about the pedigree of the particles. A natural question arises if we want to apply statistical procedures to the discrete observations: how can we find pairs of particle positions which belong to the same particle? We give an easy-to-implement 'reconstruction scheme' which allows us to redraw or 'reconstruct' parts of the trajectory of the BDI with high accuracy; moreover, asymptotically the whole path can be reconstructed. Furthermore, we present simulations which show that our partial reconstruction rule is tractable in practice. In the third part we study how the partial reconstruction rule fits into statistical applications. As an extensive example, we present a nonparametric estimator for the diffusion coefficient of a BDI where the particles move according to one-dimensional diffusions. This estimator is based on the Nadaraya-Watson estimator for the diffusion coefficient of one-dimensional diffusions and uses the partial reconstruction rule developed in the second part.
We are able to prove a rate of convergence for this estimator, and finally we present simulations which show that the estimator works well even when we depart from our set of assumptions.
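The abstract does not specify the reconstruction scheme itself; a naive variant of the idea is to pair each observed position with the nearest unused position from the previous snapshot, provided the two are close enough to plausibly belong to the same diffusing particle. The sketch below is an illustrative assumption, not the thesis' actual rule; the function name and the distance threshold are hypothetical:

```python
def match_positions(prev, curr, max_dist):
    """Greedily pair each current 1-d position with the nearest unused
    previous position, provided it lies within max_dist."""
    pairs, used = [], set()
    for j, y in enumerate(curr):
        best, best_d = None, max_dist
        for i, x in enumerate(prev):
            if i in used:
                continue
            d = abs(y - x)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            pairs.append((best, j))   # (index at time t, index at time t+dt)
    return pairs

# two snapshots of particle positions at consecutive observation times;
# the third current particle has no nearby predecessor (e.g. an immigrant)
prev = [0.0, 1.0, 5.0]
curr = [0.1, 5.2, 9.0]
print(match_positions(prev, curr, max_dist=0.5))   # → [(0, 0), (2, 1)]
```

Unmatched positions are left out, mirroring the fact that births, deaths, and immigration make a perfect pairing impossible from positions alone.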
Abstract:
Within this PhD thesis, several methods were developed and validated which are suitable for environmental and material science samples and should be applicable to the monitoring of particular radionuclides and the analysis of the chemical composition of construction materials in the frame of the ESS project. The study demonstrated that ICP-MS is a powerful analytical technique for the ultrasensitive determination of 129I, 90Sr and lanthanides in both artificial and environmental samples such as water and soil. In particular, ICP-MS with a collision cell allows measuring extremely low isotope ratios of iodine. It was demonstrated that 129I/127I isotope ratios as low as 10^-7 can be measured with an accuracy and precision suitable for distinguishing sample origins. ICP-MS with a collision cell, in particular in combination with cool plasma conditions, reduces the influence of isobaric interferences at m/z = 90 and is therefore well suited for 90Sr analysis in water samples. However, the ICP-CC-QMS applied in this work is limited for the measurement of 90Sr by the tailing of 88Sr+ and, in particular, by Daly detector noise. Hyphenation of capillary electrophoresis with ICP-MS was shown to resolve atomic ions of all lanthanides and polyatomic interferences. The elimination of polyatomic and isobaric ICP-MS interferences was accomplished without compromising sensitivity by the use of the high resolution mode available on ICP-SFMS. Combination of laser ablation with ICP-MS allowed direct micro and local uranium isotope ratio measurements at ultratrace concentrations on the surface of biological samples. In particular, the application of a cooled laser ablation chamber improves the precision and accuracy of uranium isotope ratio measurements by up to one order of magnitude in comparison to a non-cooled laser ablation chamber.
To mitigate the quantification problem, a mono-gas on-line solution-based calibration was set up, based on the insertion of a DS-5 microflow nebulizer directly into the laser ablation chamber. A micro-local method to determine the lateral element distribution on NiCrAlY-based alloys and coatings after oxidation in air was tested and validated. Calibration procedures involving external calibration, quantification by relative sensitivity coefficients (RSCs) and solution-based calibration were investigated. The analytical method was validated by comparison of the LA-ICP-MS results with data acquired by EDX.
Abstract:
During the last years great effort has been devoted to the fabrication of superhydrophobic surfaces because of their self-cleaning properties. A water drop on a superhydrophobic surface rolls off even at inclinations of only a few degrees while taking up contaminants encountered on its way.

Superhydrophobic, self-cleaning coatings are desirable for convenient and cost-effective maintenance of a variety of surfaces. Ideally, such coatings should be easy to make and apply, mechanically resistant, and long-term stable. None of the existing methods have yet mastered the challenge of meeting all of these criteria.

Superhydrophobicity is associated with surface roughness. The lotus leaf, with its dual-scale roughness, is one of the most efficient examples of a superhydrophobic surface. This thesis proposes a novel technique to prepare superhydrophobic surfaces that introduces the two length scales of roughness by growing silica particles (~100 nm in diameter) onto micrometer-sized polystyrene particles using the well-established Stöber synthesis. Mechanical resistance is conferred to the resulting “raspberries” by the synthesis of a thin silica shell on their surface. Besides being easy to make and handle, these particles offer the possibility of improving suitability for technical applications: since they disperse in water, multilayers can be prepared on substrates by simple drop casting, even on surfaces with grooves and slots. The solution to the main problem, stabilizing the multilayer, also lies in the design of the particles: the shells, although mechanically stable, are porous enough to allow for leakage of polystyrene from the core. Under tetrahydrofuran vapor, polystyrene bridges form between the particles that render the multilayer film stable.

Multilayers are good candidates for designing surfaces whose roughness is preserved after scratching: if the top-most layer is removed, the roughness can still be ensured by the underlying layer. After hydrophobization by chemical vapor deposition (CVD) of a semi-fluorinated silane, the surfaces are superhydrophobic with a tilting angle of a few degrees.
Abstract:
In many areas of industrial manufacturing, such as the automotive industry, digital mock-ups are used to support the development of complex machines with computer systems as effectively as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. In recent decades, sampling-based methods have proven particularly successful. They generate a large number of random placements for the object to be installed or removed and use a collision detection mechanism to check each placement for validity. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. A difficulty for this class of planners are so-called "narrow passages", which occur wherever the freedom of movement of the objects to be planned is strongly restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may be necessary to achieve good algorithm performance.

This thesis consists of two parts. In the first part we investigate parallel collision detection algorithms. Since we aim at an application in sampling-based motion planners, we choose a problem setting in which we always test the same two objects for collision, but in a large number of different placements. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All described methods were parallelized across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work across the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. Furthermore, we present a number of approximate collision tests based on the described methods. If lower test accuracy is tolerable, a further performance improvement can be achieved.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple narrow passages. The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that allows small collisions to be resolved, thereby increasing efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests. This further lowers the accuracy of the first planning phase, but also yields a further performance gain. The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighborhood around the existing path.

We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit multiple narrow passages. To the best of our knowledge, a collection of comparably complex benchmarks is not publicly available, and we also found no description of comparably complex benchmarks in the motion planning literature.
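The core of a BVH-based collision test, as used by the planners above, can be sketched in a few lines: build a tree of axis-aligned bounding boxes (AABBs) over each object and recursively descend only into pairs of volumes that overlap. This is a minimal sequential sketch with illustrative names, not the thesis' parallel CPU/GPU implementation:

```python
def overlap(a, b):
    # a, b: ((xmin, ymin), (xmax, ymax)) axis-aligned boxes
    return all(a[0][k] <= b[1][k] and b[0][k] <= a[1][k] for k in range(2))

def merge(a, b):
    return (tuple(min(a[0][k], b[0][k]) for k in range(2)),
            tuple(max(a[1][k], b[1][k]) for k in range(2)))

class Node:
    def __init__(self, box, left=None, right=None):
        self.box, self.left, self.right = box, left, right
    @property
    def is_leaf(self):
        return self.left is None

def build(boxes):
    """Build a BVH by median-splitting the box list along x."""
    if len(boxes) == 1:
        return Node(boxes[0])
    boxes = sorted(boxes, key=lambda b: b[0][0])
    mid = len(boxes) // 2
    left, right = build(boxes[:mid]), build(boxes[mid:])
    return Node(merge(left.box, right.box), left, right)

def collide(m, n):
    """BVH-vs-BVH test: descend only into overlapping volumes."""
    if not overlap(m.box, n.box):
        return False
    if m.is_leaf and n.is_leaf:
        return True
    if m.is_leaf:
        return collide(m, n.left) or collide(m, n.right)
    return collide(m.left, n) or collide(m.right, n)

obj_a = build([((0, 0), (1, 1)), ((2, 0), (3, 1))])
obj_b = build([((2.5, 0.5), (4, 2))])   # overlaps the second box of obj_a
print(collide(obj_a, obj_b))            # → True
```

The early exit on non-overlapping inner volumes is what makes the hierarchy an acceleration structure; in the sampling-based setting, this test is simply repeated for each random placement of one object against the other.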
Abstract:
Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication are backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches.
Furthermore, it is shown that it is less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20 and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
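The fingerprinting-based deduplication described above can be sketched minimally: split the data into chunks, fingerprint each chunk with a cryptographic hash, and store only chunks whose fingerprint has not been seen, keeping a "recipe" of fingerprints from which the original data can be reconstructed. Real systems, including those in this thesis, typically use content-defined rather than fixed-size chunking; the fixed-size variant below is an illustrative simplification:

```python
import hashlib

def dedup_store(data, chunk_size=8):
    """Store data as a recipe of fingerprints plus a unique-chunk index."""
    index = {}            # fingerprint -> chunk bytes (the chunk store)
    recipe = []           # fingerprints, in order, to reconstruct the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        index.setdefault(fp, chunk)   # only the first copy is kept
        recipe.append(fp)
    return index, recipe

def restore(index, recipe):
    return b"".join(index[fp] for fp in recipe)

data = b"ABCDEFGH" * 4 + b"12345678"      # the first chunk repeats four times
index, recipe = dedup_store(data)
print(len(data), "logical bytes,",
      sum(len(c) for c in index.values()), "stored bytes")
assert restore(index, recipe) == data
```

The chunk index is exactly the structure whose lookups form the disk bottleneck discussed above: at scale it no longer fits in memory, which is what techniques such as the BLC address by exploiting locality in the recipes.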
Abstract:
Graphene, the thinnest two-dimensional material possible, is considered a realistic candidate for numerous applications in electronic, energy storage and conversion devices due to its unique properties, such as high optical transmittance, high conductivity, and excellent chemical and thermal stability. However, the electronic and chemical properties of graphene depend strongly on its preparation method. Therefore, the development of novel chemical exfoliation processes aiming at the high-yield synthesis of high-quality graphene while maintaining good solution processability is of great interest. This thesis focuses on the solution production of high-quality graphene by wet-chemical exfoliation methods and addresses the applications of the chemically exfoliated graphene in organic electronics and energy storage devices.

Platinum is the most commonly used catalyst for fuel cells, but it suffers from sluggish electron transfer kinetics. On the other hand, heteroatom-doped graphene is known to enhance not only electrical conductivity but also long-term operational stability. In this regard, a simple synthetic method is developed for the preparation of nitrogen-doped graphene (NG). Moreover, iron (Fe) can be incorporated into the synthetic process. As-prepared NG, with and without Fe, shows excellent catalytic activity and stability compared to Pt-based catalysts.

High electrical conductivity is one of the most important requirements for the application of graphene in electronic devices. Therefore, for the fabrication of electrically conductive graphene films, a novel methane plasma assisted reduction of GO is developed. The high electrical conductivity of the plasma-reduced GO films revealed an excellent electrochemical performance in terms of high power and energy densities when used as an electrode in micro-supercapacitors.

Although GO can be prepared at bulk scale, its high defect density and low electrical conductivity are major drawbacks.
To overcome the intrinsic limitation of the poor quality of GO and/or reduced GO, a novel protocol is established for the mass production of high-quality graphene by means of electrochemical exfoliation of graphite. The prepared graphene shows high electrical conductivity, low defect density and good solution processability. Furthermore, when used as electrodes in organic field-effect transistors and/or in supercapacitors, the electrochemically exfoliated graphene shows excellent device performance. The low-cost and environmentally friendly production of such high-quality graphene is of great importance for future generations of electronics and energy storage devices.
Abstract:
Hybrid electrode materials (HEMs) are the key to fundamental advances in energy storage and energy conversion systems, including lithium-ion batteries (LIBs), supercapacitors (SCs) and fuel cells (FCs). The fascinating properties of graphene make it a good starting material for the preparation of HEMs. However, traditional methods for producing graphene HEMs (GHEMs) often fail due to a lack of control over the morphology and its uniformity, leading to insufficient interfacial interactions and poor material performance. This thesis focuses on the preparation of GHEMs via controlled synthesis methods and addresses the use of well-defined GHEMs for energy storage and conversion. Large volume expansion is the main drawback of future lithium storage materials. First, a three-dimensional graphene foam hybrid is prepared to reinforce the basic structure and improve the electrochemical performance of the Fe3O4 anode material. The use of graphene shells and graphene networks realizes a double protection against the volume fluctuation of Fe3O4 during the electrochemical process. The performance of SCs and FCs depends on the pore structure and the accessible surface area, or rather the catalytic sites, of the electrode materials. We show that controlling the porosity of graphene-based carbon nanosheets (HPCN) increases the accessible surface area and the ion transport/charge storage for SC applications. Furthermore, nitrogen-doped carbon nanosheets (NDCN) were prepared for the cathodic oxygen reduction reaction (ORR). Tailored mesoporosity combined with heteroatom doping (nitrogen) promotes the exposure of the active sites and the ORR performance of the metal-free catalysts.

High-quality electrochemically exfoliated graphene (EEG) is a promising candidate for the preparation of GHEMs. However, the controlled preparation of EEG hybrids remains a great challenge. Finally, a bottom-up strategy for the preparation of EEG sheets with a series of functional nanoparticles (Si, Fe3O4 and Pt NPs) is presented. This work demonstrates a promising route for the economical synthesis of EEG and EEG-based materials.